Archive for the ‘Software Development’ Category



On Windows 8, a call to "await aMessageDialog.ShowAsync()" can only be made once, otherwise System.UnauthorizedAccessException will be thrown (E_ACCESSDENIED 80070005). This is a helper method to display dialogs although it's not thread-safe. It's inspired on StackOverflow answers:

  using System;
  using System.Threading.Tasks;
  using Windows.Foundation;
  using Windows.UI.Popups;

  public static class DialogDisplayer {
      private static IAsyncOperation<IUICommand> currentlyShownDialog;

      public static async Task TryToShowDialog(MessageDialog messageDialog){
          try{
              RequestPreviousDialogCancelation();
              await WaitForUIThreadToBeReady();
              await ShowDialog(messageDialog);
          }
          catch (TimeoutException ex){
              //Logger.Information("Time out waiting for a MessageDialog to be closed");
          }
          catch (TaskCanceledException ex){
              CancelDialog();
          }
          catch (UnauthorizedAccessException ex){
              //Logger.Information("Multiple dialogs are being opened at the same time. There is a direct call to ShowAsync somewhere. Instead of using ShowAsync, use this method");
          }
      }

      private static void CancelDialog(){
          currentlyShownDialog = null;
      }

      private static void RequestPreviousDialogCancelation(){
          if (IsThereAnyOpenedDialog()){
              currentlyShownDialog.Cancel();
          }
      }

      private static async Task ShowDialog(MessageDialog messageDialog){
          currentlyShownDialog = messageDialog.ShowAsync();
          await currentlyShownDialog;
      }

      private static async Task WaitForUIThreadToBeReady(){
          var attempts = 0;
          while (IsThereAnyOpenedDialog()){
              await Task.Delay(TimeSpan.FromMilliseconds(100));
              attempts++;
              if (attempts > 5){
                  throw new TimeoutException();
              }
          }
      }

      private static bool IsThereAnyOpenedDialog(){
          return currentlyShownDialog != null && currentlyShownDialog.Status == AsyncStatus.Started;
      }
  }

Usage:

  var messageDialog = new MessageDialog("Hello world");
  await DialogDisplayer.TryToShowDialog(messageDialog);

We had a kind of duplication in our tests that we didn't know how to deal with. The refactoring "Introduce Polymorphic Creation with Factory Method" explained by Joshua Kerievsky in his brilliant book "Refactoring to Patterns" gave me the solution to avoid duplicated tests.

  [TestFixture] public class
  ChangingColorWithImplicitExclusionsShould : ConfigurationTests {
      [Test] public void
      not_allow_change_when_its_compulsory_has_the_same_family_than_a_configured_equipment() {
          CatalogBuilder
              .AddEquipmentWithFamily("PR1", "Radio")
              .AddEquipmentWithFamily("PR2", "Radio")
              .AddColor("red", WithEquipmentsCompulsory("PR2"));
          var configuration = Agiven.ModelConfiguration()
              .With(Agiven.ConfigEquipment("PR1"))
              .Build();

          Expect.CallTo(() => ExecuteChangeColor("red", configuration))
              .ToThrow<EquipmentsWithSameFamilyException>();
      }
  }

  [TestFixture] public class
  ChangingInteriorWithImplicitExclusionsShould : ConfigurationTests {
      [Test] public void
      not_allow_change_when_its_compulsory_has_the_same_family_than_a_configured_equipment() {
          CatalogBuilder
              .AddEquipmentWithFamily("PR1", "Radio")
              .AddEquipmentWithFamily("PR2", "Radio")
              .AddInterior("xyz", WithEquipmentsCompulsory("PR2"));
          var configuration = Agiven.ModelConfiguration()
              .With(Agiven.ConfigEquipment("PR1"))
              .Build();

          Expect.CallTo(() => ExecuteChangeInterior("xyz", configuration))
              .ToThrow<EquipmentsWithSameFamilyException>();
      }
  }

The tests are very similar; the differences are in the catalog setup (AddColor versus AddInterior) and in the action under test (ExecuteChangeColor versus ExecuteChangeInterior). The first test tries to change the color of a configuration whereas the second one tries to change the interior. Part of the business logic handling the change is the same. This is just one scenario, but we had many of them, with the same expected behavior for color, interior, equipment, and more. Eventually there was a lot of "duplication".

After refactoring, we have a base abstract class with the tests, exposing template methods that child classes have to implement in order to populate the catalog and also to execute the corresponding action:

  [TestFixture] public abstract class
  CantChangeConfigurationBecauseThereisImplicitExclusionWhen : ConfigurationTests {
      [Test] public void
      its_compulsory_has_the_same_family_than_a_configured_equipment() {
          CatalogBuilder
              .AddEquipmentWithFamily("PR1", "Radio")
              .AddEquipmentWithFamily("PR2", "Radio");
          AddImplicitExclusionItem("PR2");
          var configuration = Agiven.ModelConfiguration()
              .With(Agiven.ConfigEquipment("PR1"))
              .Build();

          Expect.CallTo(ChooseConflictiveItem(configuration))
              .ToThrow<EquipmentsWithSameFamilyException>();
      }

      protected abstract void AddImplicitExclusionItem(string code);
      protected abstract Func<ModelConfiguration> ChooseConflictiveItem(ModelConfiguration configuration);
  }

  [TestFixture] public class
  ChangingColor : CantChangeConfigurationBecauseThereisImplicitExclusionWhen {
      private const string itemCode = "irrelevant";

      protected override void AddImplicitExclusionItem(string code){
          CatalogBuilder
              .AddColor(itemCode, WithEquipmentsCompulsory(code));
      }

      protected override Func<ModelConfiguration> ChooseConflictiveItem(ModelConfiguration configuration){
          return () => ExecuteChangeColor(itemCode, configuration);
      }
  }

The base class "ConfigurationTests" contains just helper methods such as ExecuteChangeColor, or ExecuteChangeInterior, but no tests at all. Otherwise tests would run twice.

When it comes to refactoring, my preferred katas consist of experimentation with the actual code base I am working on. I just create a new branch from a certain commit, play with several refactorings and then throw it away. I usually end up with several experimental branches starting from the same commit. Sometimes if I end up with a better design, I apply the changes to the default or master branch, but that is not the goal. The goal is to improve my refactoring and design skills.

Pretty much every day, as I am coding, I find slices of the code that smell. I see room for a better design. However, if the code is working fine, I mean, if there are no known defects and there is no real need to change it, it's probably not worth spending work time to refactor it. At least not now. But I do write down a note about the smell to think about it later. This note often becomes the subject of the code kata.

When you approach your code base as a code kata, you don't have to care about time; you can just spend as much time as you want enjoying and learning. That code deals with a degree of complexity that is often hard to find in code katas. It's a real challenge, a big chance to improve your skills. The fact that I can experiment without any pressure sometimes leads me to much better designs that the code base ends up benefiting from. Eventually it's also good for my customers and colleagues.

Try it out and let me know whether you find it useful 😉

Learning with Peter

Last week I was lucky to host my good friend Peter Kofler on his visit to Gran Canaria, where he came to facilitate the Global Day of Code Retreat and also to work together for a couple of days on our biggest project at the moment.

We've been working on the same project for a year now; our team joined the client's team to play several roles, from mentoring to developing features ourselves. Peter's visit was a fantastic learning experience and also a way to bring back principles that wear out as we face recurring issues over time. Even though we started as external consultants, the fact that we've been immersed in the project for a year, sometimes leading the development ourselves, changes our vision and focus. Peter's fresh vision reminded me of myself a year ago, when the gig started, and reinforced my willingness to stick with my principles. Thank you Peter!

Notes that I wrote down during working sessions:

  • Code metrics are exactly what we need now to show everyone the health of the code base. We have enough code already and we must improve the readability of some tests, reorganise dependencies, clean up namespaces and some other things we know are important. A tool like Sonar or something like that will provide us with metrics to measure the improvements over time. It's definitely the time to visualize code metrics.
  • Dependency diagrams are another kind of metric we should use now: a quick overview of how the various namespaces are related to each other.
  • There are certain areas of the hexagon suffering from the "primitive obsession" code smell. We want more objects within our business domain, and fewer primitives.
  • More information in commit messages. Apart from the refactoring we apply or the name of the tests turning green or red, I'll try to briefly explain the benefit of every single commit, the reason to change the code. I've found myself searching for particular design decisions in the commit history and realized that the information in the messages was not enough to find what I wanted. Example: why I moved a method to a class, extracted a class... The "why" is not in the code, so it's a good idea to add it in the form of comments in the version control system.
  • I want to learn how to work with Visual Studio without the mouse, because it's faster with the keyboard only. This is an old resolution that I want to approach now.
  • I realised that our style of "mob programming" is different from the original style proposed by Woody Zuill and his team. In the original proposal, the driver is mostly "keyboarding", focusing on typing what the navigators tell him to do. Our approach is different: we have the discussion and the person who clearly sees how to translate the idea into code jumps on the keyboard and expresses it in code; no one tells the driver what to write. The driver leads the session while he is at the keyboard. It's sometimes easier to express an idea writing some code or pseudo-code than trying to explain how the code is going to look, and I think this has to do with the fact that talking constantly exhausts me. It also has to do with focus: the driver is the person who can implement the discussed idea faster, who knows the code better. Whenever the driver is stuck, the navigator who knows how to carry on asks for the keyboard and becomes the driver. I find dictating awkward and sometimes irritating. So we are not following their rule: "for an idea to go from your head into the computer it MUST go through someone else's hands". We try not to talk whilst the driver is typing, just like in pair programming, because we want the driver to be able to listen to what the navigators have to say, and I can't type and listen at the same time. Perhaps we are not doing mob programming at all. But it works for us.
  • In order to keep the good energy and motivation in a mob or pair programming session, it's better to give up or take a break than to keep working without feeling like it. I need to feel that my colleagues feel like working, and thus I must feel like doing so too.
  • I really like the way Peter summarized the Hexagonal Architecture, focusing on the direction of the dependencies. A simple diagram with two boxes joined by an arrow pointing from one to the other was enough for people to understand it. The onion diagram was also a good idea.

Regarding the Global Day of Code Retreat, there are a bunch of things I learned from Peter and from the day itself:

  • If there are no sponsors buying food, I want to buy food myself for me and for all the participants. This year I brought the food in the afternoon but I should have brought it in the morning, when people were hungry. Next time I'll arrive in the morning with breakfast for everyone. I missed the relaxing moment of the morning break sharing some fruit and coffee or tea.
  • Whenever I talk very seriously, aiming to emphasize something important, I come across as rude. I don't realize it until I see people's faces, and by then I feel bad about it. To avoid this situation I'll think twice before sending the message. I must be quiet, talk more slowly, think from a positive angle and use positive words. Words like "no" or "not" or "don't" may sound negative. I want to read about Non Violent Communication and practice it.
    As an example, during the retreat, in one iteration's retrospective I said that the facilitators (Peter and me) were not there to teach. My aim was to encourage people to learn by themselves rather than expecting some kind of master class or demo from our side, but the way I said it was understood as "I am not here to help you, or I don't care about you", as some people told me afterwards. It had a negative impact on some people's motivation.
  • At the end of the day, after the final retrospective, in the pursuit of feedback and appreciation I talked in such an unfortunate way that it made people feel indebted to us. But again I didn't realise it until I saw people's reaction. My aim was to discover what benefits participants found during the day, what they took away, so that next time I could maximize it with specific actions or exercises.
    When it's obvious that I've dedicated time and energy working for others, I must be very careful not to express that explicitly, because it will make people feel indebted, which is not what I want.

Several friends have spoken to me about Non Violent Communication (NVC) in the last 5 years or so (I believe Diego Rojas was the first one), but working with Peter was the trigger point for me to really start digging into the subject. Thank you Peter for the recommendation and for this excellent video by Marshall Rosenberg, the father of NVC:

C#, Java and other languages have the same behaviour when it comes to reference types.

  public class SomeClass {
      public string someField;
  }

  var instance1 = new SomeClass(); // instance1 is a reference to an object in memory
  var instance2 = instance1;       // instance2 is a copy of the reference
  instance2.someField = "changed";
  instance1.someField == instance2.someField // -> true
  instance2 = new SomeClass();
  instance2.someField = "changed again";
  instance1.someField != instance2.someField // -> true -> they are different objects
  instance1.someField; // -> "changed" -> nothing changed

The dot symbol after the variable name (instance1 or instance2) accesses the actual
object referenced by that variable. So before the dot, we have a variable referencing an object, and
after the dot we have the actual object.

  instance1.someField;

Means: get reference instance1, then access the object, then access someField.

Passing objects to functions has exactly the same behaviour: function parameters behave like variables assigned to the original arguments.

  public static void SomeMethod(SomeClass arg1){
      arg1.someField = "changed";
  }

  var instance1 = new SomeClass();
  SomeMethod(instance1);
  instance1.someField; // -> "changed" -> field has changed

  public static void OtherMethod(SomeClass arg1){
      arg1 = new SomeClass();
  }

  var instance1 = new SomeClass();
  instance1.someField = "changed";
  OtherMethod(instance1);
  instance1.someField; // -> "changed" -> nothing changed

Instances of SomeClass are mutable because the value of someField may be changed.
This mutation may happen by mistake, as a result of uncontrolled access to the object via some copy of its reference, causing unexpected side effects such as defects and memory leaks.
As long as the application code can reach an object (has some reference to it), the garbage collector can't free the memory allocated for that object. Here is a short version of our desktop app architecture as an example:

  public static class EventBus{
      private static ISet<Subscriber> subscribers = new HashSet<Subscriber>();

      public static void AddSubscriber(Subscriber subscriber){
          subscribers.Add(subscriber);
      }
      ...
  }

  public class View{
      public ViewModel ViewModel;

      public View(ViewModel viewModel){
          ViewModel = viewModel;
      }
      public void Init(){
          EventBus.AddSubscriber(ViewModel);
      }
  }

The life cycle of the View instance is controlled by the framework, not by us. It may create a new instance every time the view is shown on screen and destroy the instance as it disappears. However, we are adding a reference to the ViewModel to the static set of subscribers in the EventBus. As long as the subscribers list is not flushed, the garbage collector won't be able to free the memory of the ViewModel and View instances, even though the view may not even be displayed. Opening that view many times will increase the memory consumption every time: that is a memory leak. In this particular case we unsubscribe the instance from the bus before hiding the view:

  public class View{
      public ViewModel ViewModel;

      public View(ViewModel viewModel){
          ViewModel = viewModel;
      }
      public void Init(){
          EventBus.AddSubscriber(ViewModel);
      }
      public void Clear(){
          EventBus.RemoveSubscriber(ViewModel);
      }
  }

In the case of the bus there isn't much we can do to avoid having two references to the same object in two different places; we have to be aware of this behavior. In C# there is the concept of a WeakReference, but as far as I know we can't use it on WinRT (tablets).
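For completeness, this is roughly what a weak reference looks like on platforms where it is available (a sketch only; whether an event bus should hold weak references to its subscribers is a separate design decision):

  using System;

  // A WeakReference<T> does not keep its target alive: the garbage collector
  // may reclaim the target even while this wrapper is still reachable.
  public class WeakSubscriberEntry {
      private readonly WeakReference<object> subscriber;

      public WeakSubscriberEntry(object target){
          subscriber = new WeakReference<object>(target);
      }

      public bool TryNotify(){
          object target;
          if (subscriber.TryGetTarget(out target)){
              // ... dispatch the event to the still-alive subscriber ...
              return true;
          }
          // The target has been collected; this entry can be removed from the list.
          return false;
      }
  }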

In some other cases though we may avoid side effects:

  • Avoid more than one reference per object, avoid state
  • Keep variables local, avoid instance variables (fields)
  • When the object is a value object, design it to be immutable (see the sketch after this list)
  • In the case of collections, use ReadOnlyCollection when they must keep their size
  • If the object can't be designed immutable but you need to avoid state changes at all cost, clone the object returning a deep copy of it

We may be tempted to clone objects every time someone asks for a reference to them. However, this may not be possible (as with the EventBus), or it may be too expensive and complex. I'd say that cloning is the last alternative, and perhaps the need for it is a design smell. Who is responsible for ensuring that object references are not causing memory leaks? It boils down to the principle of least surprise. We should design interfaces (methods) in such a way that it's obvious how references and state are going to be managed. If it looks like the consumer (the one asking for the reference) will not be aware of it and will likely make undesired changes to the object, then cloning the object could be a good defensive approach. But I would rather try to think how to make the API more expressive, considering context and level of abstraction. I assume that the caller understands how references work.

If you, dear reader, provide some code examples I will be able to clarify my point with code. I'll update this post if I come up with some snippets.

In order to learn from others I must trust them and open my mind enough to see them as potential teachers, even if they are (apparently) less experienced than me in a certain knowledge area. The lessons to be learned may not necessarily come from the things others say or do, but from the synergy that comes out of a trustworthy relationship. It could be something I say myself as an answer to an unexpected question, one that makes me reflect from a different angle. If I believe I can't learn from another person, then I am actually building a mental barrier that will prevent me from learning anything at all. No matter if I think the other person's idea is wrong, my attitude should be open enough to let him do and show me how he works, in a way that he feels encouraged to do so, in a safe and collaborative atmosphere. In case I am completely sure that the idea (approach or technique) is not appropriate because I've tried it before and failed (even many times), I may suggest not to go there but still give the others the chance to fail themselves: "Do you really think that is the best way to go? I don't think it is, but if you definitely want to try, let's do it". Using questions rather than imperative sentences may instill the right amount of uncertainty in others so that they may also be open-minded and discuss better ways.

To build a trustworthy relationship one must learn to listen to others. Being willing to listen to others is the first step. Everyone deserves attention, everyone has stories that are worth listening to, and so do I. By letting others express themselves I am giving myself the same margin of trust and tolerance that will make me feel comfortable, creative and valuable when talking to others.

Valuable information is not only in what people say but, more importantly, in how they say it. When someone criticizes a third person who is not present during our conversation, that's an opportunity for me to get to know my speaker better. That third person is pretty much irrelevant in the conversation; the important information comes in the way the speaker expresses her feelings, which lets me know about her current mental state and the trigger points that make her upset, annoyed or whatever the feelings are. It's a chance to understand my speaker better and be empathetic. Also, the fact that she is verbalizing her thoughts may help her listen to herself and realize that she went too far; that is, verbalizing thoughts may break the negative loop. At least it happens to me: when words are already out of my mouth there is no way I can hide, the commitment is made and I end up on a different path, often feeling sorry about it.

Some people need silent moments, even "uncomfortably" long ones, in order to start talking. I have to discover the conditions under which someone will feel safe enough to bring up what he knows or whatever he has to say.

Listening to people with full attention requires quite a lot of energy. It's important that I tell them when I am running out of energy so that I can take a break. A few minutes of break per hour make a big difference. Recognizing the right time to leave the conversation for tomorrow is also crucial.

EventBus in a Windows 8 app

How to communicate between different pages of a Windows 8 app? How to manage the life cycle of the pages?

Pages are instantiated by the framework when asked to navigate:

*.xaml.cs:

  frame.Navigate(typeof(MyPage));

It will be a new instance of MyPage every time, unless the pages are cached. To set up the page cache, add this line to the beginning of

MyPage.xaml:

  <Page
      x:Name="pageRoot"
      x:Class="MyNamespace.MyPage"
      NavigationCacheMode="Enabled"
      ...

 
The cache can be cleared as described here, as long as the NavigationCacheMode is "Enabled" and not "Required". Pages with "Required" can't be deleted from the cache, as explained by Andreas Hammar.

The page's life cycle is controlled by the framework, although we can configure the cache; therefore my objects (those whose life cycle is controlled by me) should not reference pages, in order to avoid memory leaks and undesired side effects.

We've decided that Pages and other UserControls are the ones that create and manage the ViewModels' life cycle (we're using the MVVM pattern). The page's code-behind (MyPage.xaml.cs) contains a reference to the ViewModel (ViewModel.cs), a plain old C# object that contains the GUI logic. MyPage instantiates the ViewModel in its constructor, so there should be no references to ViewModels from other objects.

MyPage.xaml.cs

  public MyPage(){
      InitializeComponent();
      ViewModel = new ViewModel();
      DataContext = ViewModel;
  }

However, I can't avoid references to the ViewModel from the EventBus (a simple pub-sub static class), because it's the mechanism we are using to communicate between different parts of the application:

ViewModel.cs:

  public void SubscribeToEventBus(){
      EventBus.Subscribe(this); // static method
  }
  public void UnsubscribeEventBus(){
      EventBus.Unsubscribe(this); // static method
  }

To work around this problem, I make sure the ViewModel contains references to the bus only when it's active (visible on screen):

MyPage.xaml.cs:

  protected override void OnNavigatedTo(NavigationEventArgs e){
      ViewModel.SubscribeToEventBus();
      ViewModel.Initialize(); // optional
  }
  protected override void OnNavigatedFrom(NavigationEventArgs e){
      ViewModel.UnsubscribeEventBus();
  }

When the ViewModel requires some data previously generated in another view and stored in the AppState, it raises an event through the bus saying it's loaded and then the AppState listener replies back through the bus sending the required data.
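A rough sketch of that request/reply exchange (the event classes, the Publish method and the handler names below are hypothetical, just to show the flow):

  // Hypothetical events travelling through the bus.
  public class ViewModelLoaded { }

  public class OrderData { }

  public class OrderDataAvailable {
      public OrderData Data;
  }

  // The ViewModel announces that it is ready and waits for the data to come back.
  public class SomeViewModel {
      public void Initialize(){
          EventBus.Publish(new ViewModelLoaded());
      }

      public void Handle(OrderDataAvailable message){
          // ... bind message.Data to the screen ...
      }
  }

  // The AppState listener replies with the data stored by a previous view.
  public class AppStateListener {
      private OrderData currentOrder = new OrderData();

      public void Handle(ViewModelLoaded message){
          EventBus.Publish(new OrderDataAvailable { Data = currentOrder });
      }
  }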

We haven't found a built-in way to reset the application clearing all data, so we've implemented a reset method. The reset is an event triggered by a button in the bottom bar and sent through the bus. Our App.xaml.cs object, which is the singleton that starts the application, handles the event by clearing all data. The sequence is important to avoid memory leaks:

App.xaml.cs:

  public void Handle(AppReset appResetArgs){
      EventBus.Clear();
      AppState.Reset();
      InitializeServices();
      EventBus.Subscribe(this);
      ResetXamlCache(); // this must be the last thing to clear
      NavigateToRootPage();
  }

Only the App object subscribes to the AppReset event. Other global events like a connection failure are also handled by the App object.

A final reflection: why do we need an EventBus at all? We could just inject the same AppState instance into every ViewModel to share data among the different screens. Something like a NavigationService could have solved this for us. The following pseudocode illustrates the idea:

NavigationService.cs:

  public static void NavigateTo<T>(){
      var frame = (Frame) Window.Current.Content;
      var viewModel = Factory.ViewModelFor<T>();
      frame.Navigate(typeof (T), viewModel);
  }

MyPage.xaml.cs

  protected override void OnNavigatedTo(NavigationEventArgs e){
      ViewModel = e.Parameter as ViewModel;
  }

Then in the Factory we could inject the same AppState into all ViewModels and manage their life cycle. This level of indirection could have changed our architecture. A custom base page could implement "OnNavigatedTo" to avoid duplication.
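A hedged sketch of that idea (the Factory, the shared AppState and the base page below are hypothetical, and it assumes the ViewModel accepts the AppState in its constructor):

  using System;
  using Windows.UI.Xaml.Controls;
  using Windows.UI.Xaml.Navigation;

  // Hypothetical base page: pages inheriting from it get their ViewModel from the
  // navigation parameter, so the OnNavigatedTo override isn't duplicated per page.
  public abstract class BasePage<TViewModel> : Page where TViewModel : class {
      protected TViewModel ViewModel;

      protected override void OnNavigatedTo(NavigationEventArgs e){
          ViewModel = e.Parameter as TViewModel;
          DataContext = ViewModel;
      }
  }

  // Hypothetical factory: a single AppState instance shared by every ViewModel it builds.
  public static class Factory {
      private static readonly AppState sharedAppState = new AppState();

      public static object ViewModelFor<T>(){
          if (typeof(T) == typeof(MyPage)){
              return new ViewModel(sharedAppState);
          }
          // ... one case (or a registration table) per page type ...
          throw new ArgumentException("No ViewModel registered for " + typeof(T).Name);
      }
  }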

Thanks to my friend Juan M. Gomez for the idea of the navigation service.

During Socrates Conference 2015 we decided that it's the right time to jump into ES6 to develop a greenfield project that our customer is starting. Given that ES6 is already the stable, latest version of JavaScript, it does not make sense to start a new project with ES5, an already old version of the language. With the kind of functional style that we use when coding in JavaScript, the changes are not too big, as we are not going to use classes anyway. But we can leverage block scope with "let" and "const", avoiding the use of "var" from now on. Configuring the tools has taken more time than I thought, as the tooling and the ecosystem are changing really fast. You read a recipe from a blog post which is 6 months old and it turns out that most of the stuff described is no longer working. Some npm packages don't work anymore, or the behavior at some point is very different from a previous version... Apparently things are even more obscure on Windows, which is the platform we have to use for this project.

As an example, I installed the latest version of karma via npm. But when running gulp from the command line it stopped working with no error message at all, just a line break and back to the prompt. I commented out all the lines in the gulpfile.js and then uncommented them one by one, executing gulp each time, to discover that "require('karma')" was the reason. So I got into the node REPL and typed this myself:

var k = require('karma')

The result was the same: the node REPL exited silently, getting me back to the command line prompt. I couldn't find a single way to catch the error, although I tried try-catch, signal capturing, domains... and none of that worked. Then I started downgrading the version of the karma package until it worked for me. Version 0.13.3 works but 0.13.4 doesn't. It must be a very specific problem on my machine, but I couldn't find any other solution. Eventually we are not using karma for now; we are using the Jasmine standalone version and mocha.

This is the simplest gulpfile I was able to get working:

  var gulp = require('gulp');
  var sourcemaps = require("gulp-sourcemaps");
  var mainBowerFiles = require('main-bower-files');
  var browserify = require('browserify');
  var vinylSource = require('vinyl-source-stream');
  var glob = require('glob');
  var vinylBuffer = require('vinyl-buffer');
  var babelify = require('babelify');

  var globalNamespace = 'mynamespace';
  var launcher = './Scripts/src/main.js';
  var sourceFiles = 'Scripts/src/*.js';
  var specFiles = 'Scripts/spec/*.js';
  var libFiles = 'Scripts/dist/vendor/*.js';
  var distFolder = "Scripts/dist";
  var allFiles = [libFiles, sourceFiles, specFiles];

  gulp.task('package-src', function () {
      var filenames = glob.sync(sourceFiles); // generate array of filenames
      return browserify({
          entries: filenames,
          debug: true
      })
      .require(launcher, {expose: globalNamespace}) // publish
      .on('error', function (err) {
          console.error(err); this.emit('end');
      })
      .transform(babelify)
      .bundle()
      .pipe(vinylSource('all.js')) // generated output file
      .pipe(vinylBuffer()) // required for sourcemaps
      .pipe(sourcemaps.init())
      .pipe(sourcemaps.write("."))
      .pipe(gulp.dest(distFolder));
  });

  gulp.task('package-tests', function (done) {
      var filenames = glob.sync(specFiles);
      return browserify({
          entries: filenames,
          debug: true
      })
      .on('error', function (err) {
          console.error(err);
          this.emit('end');
      })
      .transform(babelify)
      .bundle()
      .pipe(vinylSource('specs.js'))
      .pipe(vinylBuffer())
      .pipe(sourcemaps.init())
      .pipe(sourcemaps.write("."))
      .pipe(gulp.dest(distFolder));
  });

  gulp.task('package-vendor', function () {
      return gulp.src(mainBowerFiles({filter: '**/*.js'}))
          .pipe(gulp.dest(distFolder + '/vendor/'));
  });

  gulp.task('default', [
      'package-src',
      'package-vendor',
      'package-tests'
  ]);

The generated package is "all.js" which I include in the html page. The application's entry point is on main.js with exposes a function called startApp.

App starts up at the bottom of the html page:

  <script>
      var mynamespace = require('mynamespace'); // require function is provided by browserify, no need to include require.js
      mynamespace.startApp();
  </script>

main.js:

  import viewModel from "./viewModel";
  import restClient from "./restClient";

  export function startApp() {
      let rc = restClient();
      let vm = viewModel(rc);
      ko.applyBindings(vm);
  }

In order to run the tests, the simplest choice was the Jasmine standalone version, including the generated "specs.js" file in the SpecRunner.html page. As the tests import the production code, the generated "specs.js" file already includes all the production code.

tests:

  import restClient from "../src/restClient";
  import viewModel from "../src/viewModel";

  describe("make a vehicle order", ()=> {
      ...

The next step was to include "watchify" in order to rebundle every time a file is saved.

gulpfile.js:

  var gulp = require('gulp');
  var sourcemaps = require("gulp-sourcemaps");
  var mainBowerFiles = require('main-bower-files');
  var browserify = require('browserify');
  var vinylSource = require('vinyl-source-stream');
  var glob = require('glob');
  var vinylBuffer = require('vinyl-buffer');
  var watchify = require('watchify');
  var babelify = require('babelify');

  var launcher = './Scripts/src/main.js';
  var globalNamespace = 'mynamespace';
  var sourceFiles = 'Scripts/src/**/*.js';
  var sourceBundle = 'all.js';
  var specFiles = 'Scripts/spec/**/*.js';
  var specBundle = 'specs.js';
  var libFiles = 'Scripts/dist/vendor/*.js';
  var distFolder = 'Scripts/dist';
  var allFiles = [libFiles, sourceFiles, specFiles];

  gulp.task('package-src-dev', function() {
      bundleWatchify(sourceFiles, sourceBundle);
  });

  gulp.task('package-src', function() {
      bundle(sourceFiles, sourceBundle);
  });

  gulp.task('test-dev', function () {
      bundleWatchify(specFiles, specBundle);
  });

  gulp.task('package-vendor', function () {
      return gulp.src(mainBowerFiles({filter: '**/*.js'}))
          .pipe(gulp.dest(distFolder + '/vendor/'));
  });

  gulp.task('default', ['package-src', 'package-vendor', 'test-dev']);
  gulp.task('package-dist', ['package-src', 'package-vendor']);

  function bundleWatchify(sources, output) {
      var watchified = watchify(doBrowserify(sources))
          .on('update', function (filenames) {
              console.log('rebuilding -> ', filenames[0]);
              rebundle(watchified, output);
          });
      return rebundle(watchified, output);
  }

  function bundle(sources, output) {
      return rebundle(doBrowserify(sources), output);
  }

  function doBrowserify(sources) {
      var filenames = glob.sync(sources);
      var browserified = browserify({
          entries: filenames,
          debug: true
      });
      return browserified;
  }

  function rebundle(b, output) {
      return b
          .require(launcher, { expose: globalNamespace })
          .on('error', function (err) {
              console.error(err);
              this.emit('end');
          })
          .transform(babelify)
          .bundle()
          .pipe(vinylSource(output))
          .pipe(vinylBuffer()) // required for sourcemaps
          .pipe(sourcemaps.init())
          .pipe(sourcemaps.write("."))
          .pipe(gulp.dest(distFolder));
  }

This post was written on September 14th, 2015; if you try to use any of the snippets a few months later, they probably won't work for you. Versions used:

package.json:

  {
      "name": "VehicleOrders",
      "version": "1.0.0",
      "description": "",
      "main": "gulpfile.js",
      "devDependencies": {
          "babel": "5.8.23",
          "babelify": "6.3.0",
          "browserify": "11.0.1",
          "glob": "5.0.14",
          "gulp": "3.9.0",
          "gulp-babel": "5.2.1",
          "gulp-concat": "2.4.1",
          "gulp-sourcemaps": "1.5.2",
          "main-bower-files": "2.8.0",
          "vinyl-buffer": "1.0.0",
          "vinyl-source-stream": "1.1.0",
          "vinyl-transform": "1.0.0",
          "watchify": "3.4.0"
      },
      "scripts": {
          "test": "echo \"Error: no test specified\" && exit 1"
      },
      "author": "AIDA",
      "license": "ISC"
  }

Notes from Socrates 2015

The 5th Socrates Conference in Germany has been my first continental Socrates Conference (I've been to the UK one twice, plus another one in the Canaries). What a mind-blowing unconference! I got even more value from Socrates than I expected. This has been the biggest one ever in size, with almost 190 people. There were so many people that it felt like there were 3 conferences going on at the same time. Even in my last couple of hours at the venue I still found new faces, people I thought I hadn't seen before. Despite so many people, the atmosphere was excellent as usual and the organization brilliant. Congratulations to the organisers! Everything was organised with German precision! Danke schön 😉
Unfortunately Andreas - one of the founders - couldn't join us but I am glad he is recovering well.

I've learned many things, met old friends (although I wanted to spend more time with them!) and made new ones.

I have to thank my friend Ivan (@istepaniuk) for being our host and guide in Amsterdam and for driving us to Soltau. Thank you Imo (@kabomi), Juan (@_jmgomez_) and Adrian (@eidrien) for your fantastic company.

Thank you Pierluigi Pugliese (@p_pugliese) for your excellent job as facilitator and the conversation/coaching we had during lunch time.
Notes:

  • Twitter hashtag was #socrates15
  • Really like the idea of the world cafe on Thursday to get people thinking on what they would like to happen the next days.
  • There are so many proposals that it's hard to remember some sessions when looking at the panel, which makes it even harder to choose sessions. Next time I'll write down on paper exactly the minimum and most effective sentences I'll need to present my proposal, so that I'll just have to read them when the time arrives. Talking into the mic is a bit uncomfortable when you are not used to it.
  • I'll book more time to spend with my friends and especially with my colleagues.
  • In the sessions intended for me to ask questions, to be taught something or to have a discussion, it's not necessary to have slides, but it's worth preparing the session. It could be a script, a few examples to discuss, a set of questions... something to facilitate the session and an idea of what to achieve with it.
  • Some of the most valuable moments happen outside the scheduled timetable or in sessions with a small group of people. Next time I'll plan to spend about half of each day out of the rooms, talking and coding with people, or even coding on my own in places where others can see me and join.
  • I will propose maximum one session per day, then I'll feel more relaxed and free to change plans.
  • Sleeping enough is key to enjoying the event until the end; otherwise I am already a zombie at the end of the first day. Nightly discussions are super interesting, so I'll probably sleep in at least one morning to be fresh and ready for the night.
  • Socrates is an open space where I can go without any session prepared, people are so amazing that there will be for sure a lot of stuff to learn.
  • The idea of workshops on Sundays is brilliant; we'll do the same in Socrates Canaries. I've learned many things on Sunday working on a walking skeleton and solved many doubts regarding DDD. Special thanks to Bardford (@gruebelinchen), Björn, Felipe (@felipefzdz), Martin (@md42), Dimitry (@dimitrypolivaev), Ivan and all the people that stepped in and out during the workshop.
  • It's the right time for our team to embrace ES6 and use it in the project we are starting from scratch. Thank you Wolfram (@wolframkriesing), Marco (@marcoeimrich) and Christian (fonzygruen) for all the insight and information. Thank you also for ES6Katas and Ramda.js (we're gonna use both), and for valuable resources like NodeSchool and the refactoring kata (refactoring to functional style using Ramda) proposed by Marco.
  • Property-based testing is definitely something I want to try. Thank you for the insight and opinions on when to use it and when not to use it, especially to Pawel (@pawelduda) and Kuba.
  • The theory of type systems is super interesting, I would like to get the slides from Ivan (@ikusalic) and compare how Type Classes in Scala are different to generics in C#.
  • Informal discussions like "Developer Ethics" are perfect to be held during lunch time. I'll propose topics for lunch time like in this conference, so that I'll know what to talk about with people I don't know but who share a common interest. Unfortunately tables were not big enough this time to join some lunch discussions.
  • I really like the metaphor of powerlifting in pairs to explain pair programming; thank you Houssam (@houssamfakih) for sharing your experiences along these years of practice.
  • I want to learn more about Event Sourcing and CQRS. This is the second conference where people recommend Event Storming and I haven't used it in a real project. It's on my to-do list.
  • Thank you Tobi (@tpflug) and Chris (@c089) for organising the Power Point Karaoke once again, and the variants. Thank you Adrian, Pawel, Christian (@dertseha) & company for such hilarious nights. I plan to play the PP Karaoke on New Year's Eve with family and friends.
  • Choose the blue route in the GPS, not the red one - although Germany is beautiful.
  • 30 mins gym + 30 mins sauna after the sessions is a must. Germans get naked in the sauna no matter if men and women are mixed.

Using C# Collections

There are many ways to work with collections. We are following the Microsoft Guidelines for Collections plus some ideas that Kent Beck explains in Implementation Patterns. I've created a repository with code examples so that anyone can play with them. All the unit tests in the project are green except for two, which are red on purpose to express my surprise when I wrote them, because it's a behavior I didn't expect.

IEnumerable<T> is not dangerous in itself; the surprise may come when the variable is created via a LINQ expression (.Select, .Where...). Then it may be the case that the query execution is deferred and the behavior of the code is totally unexpected. For this reason we try to avoid IEnumerable<T> and IQueryable<T> as the return type of functions.
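A small, self-contained example of the kind of surprise deferred execution can cause (illustrative only, not taken from the repository mentioned above):

  using System;
  using System.Collections.Generic;
  using System.Linq;

  public class DeferredExecutionSurprise {
      public static void Main(){
          var numbers = new List<int> { 1, 2, 3 };

          // Nothing is executed here: the variable only describes the query.
          IEnumerable<int> doubled = numbers.Select(n => n * 2);

          numbers.Add(4);

          // The query runs now, over the current content of the list, so the
          // element added after the query was "created" is included as well.
          Console.WriteLine(string.Join(",", doubled)); // -> 2,4,6,8

          // Materializing with .ToList() at creation time would have frozen the result.
      }
  }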

Try not to expose collections at all, as far as you can, by wrapping them within objects you own which expose only the minimum domain-required features. But if you have to expose the collection anyway, in our teams the convention is:

  • ReadOnlyCollection<T> is the type returned to avoid modifications to the number of elements in the collection. (Internally it's just a wrapper around IList).
  • IList<T> is returned when the collection may be modified and the order of the items in the collection is relevant.
  • ICollection<T> is returned when the collection may be modified and we don't care about the order.

Good code expresses the programmer's intent, this is why we choose from the three types above to return collections when we have to.
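As an example of the first convention, a hedged sketch with a hypothetical domain object (not code from the repository):

  using System.Collections.Generic;
  using System.Collections.ObjectModel;

  // The order owns its lines: callers can read them through the returned
  // ReadOnlyCollection but cannot add or remove elements from the outside.
  public class Order {
      private readonly List<string> lines = new List<string>();

      public void AddLine(string line){
          lines.Add(line);
      }

      public ReadOnlyCollection<string> Lines(){
          return lines.AsReadOnly();
      }
  }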

When it comes to parameters, it's OK to use IEnumerable<T> as a parameter because it's the most generic collection type. Although the caller could send an unresolved LINQ query as an argument, hidden in the form of an IEnumerable<T>, the method does not have to be defensive about it (in most cases).

Other built-in collections like List<T> are intended to be used internally, only for implementation. So it's OK for a private method to return List<T>. The type Collection<T> actually implements IList<T> so I can't find a good reason right now to use Collection<T> instead of just List<T>.