Archive for the ‘Software Development’ Category



.NET 4.5 came out with a really handy built-in asynchronous mechanism: async and await. However, the signature of a void method looks a bit strange:

  public async Task SomeAction(){...}

It is possible to use async void, but it's not recommended unless we are talking about event handlers. It's also a bit strange that the type specified in the signature is not the one actually returned:

  public async Task<string> SomeQuery(){
      // ...
      return someString;
  }

But everything else is great and very handy.

Stubbing method calls can be hard though. You can get weird exceptions like System.AggregateException when running tests with NUnit. The problem comes up when awaiting a stubbed async method:

  // Notice that async tests with NUnit require the test method to be async Task rather than async void

  [Test]
  public async Task filter_results_from_search(){
      var srv = new Mock<Service>(); // using Moq
      var sut = new Sut(srv.Object);

      await sut.SomeAction(); // Exception happens here

      SomeAssertionOverHere();
  }

  // Production code
  public class Service { // dependency
      public virtual async Task<string> SomeQuery(){ // virtual is mandatory in order to mock it
          return "someResult";
      }
  }

  public class Sut { // system under test
      Service srv;

      public Sut(Service srv){
          this.srv = srv;
      }

      public async Task SomeAction(){
          var result = await srv.SomeQuery(); // NullReferenceException in here
          // ...
      }
  }

The problem is that Moq makes the stubbed method return null when invoked, because we are not specifying any return value, and the default value for Task<string> is null. We should tell Moq to return a proper Task:

  [Test]
  public async Task filter_results_from_search(){
      var srv = new Mock<Service>();
      srv.Setup(s => s.SomeQuery())
         .Returns(Task.Factory.StartNew(() => "someQuery"));
      var sut = new Sut(srv.Object);

      await sut.SomeAction();

      SomeAssertionOverHere();
  }

The key is to return a new task: Task.Factory.StartNew(lambda)
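Note that since the stub only needs an already-completed task, Task.FromResult (also available since .NET 4.5) is arguably a simpler alternative to Task.Factory.StartNew; a minimal sketch:

  // Alternative: wrap the canned value in an already-completed task
  srv.Setup(s => s.SomeQuery())
     .Returns(Task.FromResult("someQuery"));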
 

This post is about two of the XP values: Simplicity and Feedback. They are fundamental and frequently forgotten.

When we write software or draw a user interface, we are shaping one concrete solution out of the many a problem may have. By choosing one solution we automatically discard the others, because we never go back and write a different solution if the one we have already “works”. That's why we must constantly reflect on the complexity of the solution we've chosen. The best thing is to talk it over with colleagues, to discuss it. Often, just by explaining the idea, we discover we can achieve the same thing in a simpler way.

The key to being able to simplify is to keep in mind the business goal (what fundamental problem we are solving). Not the long-term goal of the software, but the goal of the current release. We ship frequent releases in short cycles to get feedback, and each one has the goal of learning something concrete about the business or the users. Each release offers some new feature or a change that helps them in their day-to-day work, without trying to cover too much at once. I like the list of features and changes of a release to fit in my head, to be countable on the fingers of one hand without needing documents. And there should be one predominant goal.

To simplify, we must keep in mind the goal of the current release, which is not the same as that of the release we will ship in 3 months. Within 3 months we will have changed a good part of the code, sometimes through refactoring and sometimes because we realize there are other needs. Life changes in 3 months, and in less.

It's worth giving the solution more than one pass; we have to “re-simplify” as much as possible. It takes little time to do and has a very big payoff in the medium and long term. "OK, it works now; let's give it some thought and see if we can simplify it, see if we can delete half the code and have it still work".

The code must end up easy to understand, and its simplicity is more important than the use of patterns. We don't have to apply the MVVM pattern in absolutely every part of the presentation layer; sometimes its complexity is not justified. By the way, let's remember that a ViewModel is a representation of the view and not a DTO; a ViewModel is a kind of "controller".

Solutions should be as open as possible, to give the user room for maneuver and to learn what their preferred way of working is. We have to identify the unnecessary limitations we may be introducing, both through the user interface and at the code level, in order to avoid them. An open solution focuses on the problem to solve, on the “what”, and allows changes of form easily, the ability to pivot towards other things.

For example, let's imagine there are 3 fundamental actions the user can perform in the application. They don't need to be performed in any particular order; the business goal is that the user can perform actions A, B and C whenever it suits them best. So let's not make them execute action B before they can execute action A. That would be introducing an unnecessary limitation.

Now an example of excessive complexity. Let's imagine we want to show the salesperson the availability of a certain vehicle model, that is, whether it's in stock. The business goal is for them to know whether they can sell that model or not. Why would we count how many vehicles of that model are available? They don't care whether there are 24; they only need to know whether they have a unit available for sale or not. To notice this excess of complexity we have to remember the goal of the release. Remember the purpose of the current release.
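As a sketch of that idea in code (hypothetical names, not from a real project), expose exactly what the business asks for, a boolean, rather than a count:

  // Excessive complexity: answers a question nobody asked
  public int CountAvailableUnits(string model) {
      return stock.Count(vehicle => vehicle.Model == model);
  }

  // Aligned with the release goal: can this model be sold or not?
  public bool IsInStock(string model) {
      return stock.Any(vehicle => vehicle.Model == model);
  }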

The less software we write, the better. The line that never fails is the one that isn't written. Every line we write is a commitment we have to maintain, until someone comes along and deletes it. When we are tight on time to meet the goal of the release (in the software world, this is always), we must cut scope (functionality) and never quality. We have to do everything possible to honor the planned delivery dates, achieving the goal by removing the features that matter less at the current moment.

DDD: Entity or Value Object?

Depending on the context, a certain concept may be modeled as an entity or as a value. It could be both in different bounded contexts. To me, good heuristics for choosing between entities and values are life cycle and mutability. Values are immutable; they represent constants. A color is a good example (thanks Alfredo). If a color changes from red to blue, it's not the red color anymore but something else. The cost of returning a new instance when the color changes is cheap:

  Color yellow = red.mixWith(green);
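A minimal sketch of what such an immutable value could look like (hypothetical C# implementation; the channel-averaging mix is just for illustration):

  public sealed class Color {
      private readonly int red, green, blue;

      public Color(int red, int green, int blue) {
          this.red = red;
          this.green = green;
          this.blue = blue;
      }

      // Mixing never mutates this instance; it returns a new value
      public Color MixWith(Color other) {
          return new Color((red + other.red) / 2,
                           (green + other.green) / 2,
                           (blue + other.blue) / 2);
      }

      // Two colors with the same channels are the same value
      public override bool Equals(object obj) {
          var other = obj as Color;
          return other != null && other.red == red
              && other.green == green && other.blue == blue;
      }

      public override int GetHashCode() {
          return (red * 31 + green) * 31 + blue;
      }
  }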

When returning a whole new object costs many lines of code and memory, that could be a sign that the concept is probably better modeled as an entity. If we care about changes, that could be another sign. Recently we had to model a draft invoice. From the point of view of the business, a draft doesn't require a unique number; only the final invoice has to have a unique sequential number (identity). So I thought we could model the draft as a value. After all, if two drafts consist of the same customer and the same lines, the business could not tell the difference between the two. We chose a value object, but then every time a new line went into the draft we had to create a new one, copying all the existing lines. Too much code for such a simple operation. Finally we changed the design, modeling the draft as an entity.

Questioning the design made us speak to the domain experts: “How would you search for a certain draft? If two drafts contain exactly the same data, how do you differentiate them?”. And they decided the application should allow just a single draft per customer. It simplified the application, even though eventually we used an entity.

“As the object changes, we may be interested in tracking why, when and by whom changes were made” – this is a good heuristic to think of an entity.

“When the entity's identity generation is delayed until it's persisted, we might have to implement the equals and hashCode methods as if it was a value object to avoid bugs”. This kind of workaround makes the distinction fuzzy.

I currently think that choosing an entity or a value boils down to good object-oriented and simple design. Value objects are older than DDD; I believe the term was coined by Ward Cunningham. Even if the object is an entity, I try to avoid getters and setters as much as I can. Rather than reading information from object properties, objects may communicate with each other by sending messages.

Things to remember from the two chapters:

  • I like the trick of Layer Supertype on page 187 to hide a surrogate identity from the entity's consumers.
  • I find the idea of “role interfaces” on page 203 quite handy to expose only certain behavior to consumers.
  class Customer : AddOrder, MakePreferred {
  }

  var customer = Repository.Find<AddOrder>(customerId);
  customer.AddOrder(order);
  • Advantages of values: “Value types that measure, quantify or describe things are easier to create, test, use, optimize and maintain”.
  • Limit a value to depend on and understand its own type and the types of its attributes.
  • Pass only values as parameters to value methods.
  • I like how he uses the Java enum to implement the State pattern on page 235 (he calls it Standard Type). Java enums are really powerful.

DDD Paradoxes

I read Eric Evans' book on DDD years ago and fell asleep with it. I didn't really take much value from it. Last year I read Vaughn Vernon's book on IDDD and found it more interesting because of the code examples, but still too abstract. Too much prose, too thick. One problem with both books is that their written English style is quite sophisticated for a non-native English speaker like me. But I believe there is a lot of value in DDD, so I've read it several times, on Kindle and later on paper. My feelings about DDD are still a bit of a frustration. I find many contradictions in Vernon's book. But there is a lot of valuable advice in the book that I want to benefit from. I should be able to get more value from the book.

This is the first of a series of posts to myself with the idea of studying the subject better, trying to summarize the heuristics that I think are going to be most useful for me, and hoping for a return on the investment of time. I want to pin down the ideas that I find good.

The following statements are not necessarily in the book literally; this is just what I have in mind, and it could just be my interpretation.

Throughout the book we are advised not to model the domain based on infrastructure concerns, tools or frameworks. However, there are sentences where I think it reads: depending on the persistence mechanism, you might choose to model a concept as an entity or as a value object. Aggregates are especially contradictory. “Aggregate is synonymous with transactional consistency boundary”. Well, transactionality is purely infrastructural, so what's the point of aggregates? I believe the chapter on aggregates has very little to do with modeling, but it can be really useful if I can extract a list of architecture/infrastructure problems and solutions, recipes. I just need to extract a list of architecture solutions from that chapter; I will do so in an upcoming post.

The distinction between Application Service and Domain Service is hard for me to see. I understand that an Application Service should not contain domain logic; however, I am not sure anymore what “domain logic” is after reading some examples of Application Services. I have to dig further into it.

Choosing between an Entity and a Value Object is still a hard decision. Apparently Value Objects should be used more often than they are; developers tend to abuse Entities and end up with anemic models. However, my impression is that entities are used way more than values throughout the book. After reading the chapter on Value Objects for the first time, I thought I got it: if two objects could be considered equivalent when they had the same field values, they must be value objects. After working with experienced DDD developers on my team and reading the book again, I see I was wrong. I'll focus on the distinction between entities and values in upcoming posts.

This post is going to be updated as I encounter more paradoxes (from my current understanding).

2014: A great year

The last hours of December 31st, 2014. For millions of people it's a day of celebration. Approximately 75% of the population changes year every January 1st, following the Gregorian calendar, although only a few of us can throw a party. For many it's the last day of their life; for others it's the first. According to the UN, more than 30 million people are displaced because of armed conflicts, and many of them are in the middle of a war, fighting for survival. Many will die of hunger and others of disease. Billions of animals suffer the holocaust of industrial farms and slaughterhouses to satisfy a minority of the planet's humans. And the planet, according to various organizations, is at the worst moment in its history.

Without ignoring everything that is happening, there is still room for optimism, hope and gratitude. My wish is to be more optimistic and happier every day, without the need for amnesia, ignorance or indifference. Doing everything in my power for others. Consuming responsibly.

The roadmap consists of living the present moment with attention, grateful for the many good things that happen to me.

2014 has been the best year of my professional career since I started working independently. Everything that could go well has gone well. I've been lucky to visit a multitude of companies and work with great people. I've had so much work this year that I'm not going to name all the places I've been and the people I've worked with, because I would surely leave someone out. I loved watching some colleagues of the trade at work, like Luis Fraile or Luis Ruiz Pavón, and it has been a pleasure to discover Carlos Bastos.

At the beginning of 2013 I thought I should focus my efforts on foreign markets, mostly the UK and Germany, but this year the national market has given me much more than I expected and I've rarely left the country. Practically every week I received a request for training or consulting, and I've had several custom product development engagements that I've had a blast with. I've managed to program as much as I wanted. I've also been able to start recording the programming screencasts I wanted to (and I'll make more).

But the most important thing is the alliances that have been forged along the way. Without looking for it, we are closing the year working as teams instead of as a lone professional. I'm delighted to close 2014 working with my colleagues Luis Rovirosa, Juan M. Gómez, Imobach Martín, Nestor Bethencourt, Modesto San Juan, Alfredo Casado and Fran Reyes. And with all the colleagues at AIDA (more on that below).

Luis Rovirosa is helping me with consulting and training at several clients; he's an ally I had wanted to collaborate with for a long time. Juan, Imo and Nestor are developing with me the template editor of the popular MDirector, Antevenio's email marketing tool, a very nice project. Alfredo, Modesto and Fran are with me and the rest of the AIDA colleagues, working on a thrilling project for Grupo Domingo Alonso. And there are more projects on the way. That's why I'm in the process of incorporating an SL (a Spanish limited company), which will be my second SL. I'm already changing the front page of this website (carlosble.com) and soon there will be more photos and info about everyone.

With such a group of professionals we can meet all the demand that comes my way, and I can rest easy when delegating. Being able to delegate without worry is priceless.

Closing the year working for Grupo Domingo Alonso is a stroke of luck. It's one of the biggest companies in the Canary Islands, although I didn't know it because cars don't catch my attention. A consulting engagement with them came up, partly through our friend and teacher Jose Juan Hernandez of the ULPGC, and I got to meet them. I was delighted with the human quality of the team, their values, their energy and the project itself. I had no intention of proposing a collaboration to carry the project forward, but when I got back home and a few days went by, I realized that I wanted to work there, and I sent them a proposal. It's a super challenge: the construction of an extremely complex Dealer Management System. The challenge is so big that I needed to call in reinforcements and, to my surprise, Alfredo, Modesto and Fran dropped everything (well, not their partners!) and moved with me to Gran Canaria to get into the project. We are two months into the project and I like everything I see more and more. The human capital of AIDA (Aplicaciones Informáticas Domingo Alonso) is incredible. At the Christmas party our colleague Alejandro Artiles put together this funny video explaining what we are living through in these early stages of the project, "Star Wars" style:

As if that weren't enough, in June of this year my father was born again. He was admitted to the emergency room and the doctors said he wouldn't survive: a fulminant colon cancer. However, he has overcome it completely in a very short time. My father tells the story on his blog. It has been the best news of the year, the best thing that has happened.

This post is, above all, to say a thousand thanks to all of you who are making this tremendous moment possible. Thanks to everyone who has counted on me this year. It has been a pleasure, and I'm sure there is much more ahead.

Thanks to Dácil for always being there, for being my engine. Thanks to my mother for taking care of all our shelter animals when I can't do it because of my work. This year wouldn't have been the same without my mother; I don't know how we would have managed.

Wow, it's 2015 already! Happy new year!

Implementing BDD – part 1

Scenario review


This is the first of a series of posts on how our team is practicing BDD. These are mostly notes to myself and our team so other readers may not find enough context to understand the points.

After a few weeks we have decided that:

  • Although source code is written in English, the specifications are written in Spanish. Because the business is always discussed in Spanish, it's the language used to determine what we need to do. So the Cucumber scenarios are written in Spanish.
  • There is a glossary of business terms to help us document the ubiquitous language. The glossary contains the translations from Spanish to English so that the source code expresses the same concepts (see the sketch after this list).
  • If developers need to change or adjust the scenarios for some reason (for instance, automation purposes), then business analysts have to validate the changes.
  • Scenarios will be stored in one place only, and that place is SpecFlow (Cucumber). This means that business analysts need to have all the tools installed and access to the source control repository (Visual Studio and TortoiseHG).
  • After the specifications workshop, scenarios are written in pairs or in groups of 3 or 4 people. Two people is usually enough. We've realized that writing the scenarios in pairs helps us come up with more questions and more ideas to simplify the problem.
  • The automation of the scenarios should also be done in pairs, because this is yet another opportunity for us to detect inconsistencies or doubts. Well, in general we just pair most of the time.
  • A user story is not ready for implementation until it's got at least 4 scenarios. We started with just one and the lack of context caused us to introduce wrong models in the code. Actually when test-driving a small function, we never start out with a single example but with a to-do list. Then we focus on each test writing the minimum code required but we do know where we want to go. This is the same but at a higher level.
  • Given that our business analysts know how to read code, they should read the source code at the end of the iteration. By reading the part of the code inside the hexagon (we use hexagonal architecture), they can ensure the designed model makes sense.
  • Scenarios should not repeat irrelevant details. If one scenario uses real data like a "plate number", then the following scenarios don't need to repeat the whole plate number, just something like "x". We should keep in mind that scenarios complement each other in terms of documentation. They should be independent for execution but not for documentation.
  • Triangulation: we can make a scenario go green quickly. However, triangulating the code so it really works for all the cases falling under that scenario may require more code. If the triangulation does not concern business details, we use NUnit to test-drive rather than SpecFlow. Scenarios are used for documentation, not for triangulation, unless the triangulation covers relevant business details.

It was by chance that one of our business analysts came by a mob programming session and got to see code related to the scenarios that we had defined together. He quickly realised that we were introducing aliens into the domain (wrong models). It was because we were lacking some other scenarios. The fact that the code was on the wall in a massive font size and really communicated our intent, together with the fact that the reader is very clever, made us discover a useful technique to double-check our assumptions.

 

A method is an abstraction

The name of a method must be at a higher level of abstraction than its body for the method to be worth it. The name should explain what the method does, but not how. If the name communicates exactly the same as its code, it could be a sign that the method does not pay off and should be inlined.

Whenever we extract a method from a bunch of lines of code, we are adding a little abstraction to our code. If the extraction happens too early, we are adding wrong abstractions to the code. Those premature abstractions get in the way of a better design, often hiding duplication and deeper design problems. Lots of small methods do not necessarily mean a better design.

I extract a method only when my tests are green and the responsibility of those lines is really obvious; when I am done implementing the branch of the solution I am test-driving and it's easy to find a name for the new method. I don't create new methods to make a test pass. Methods appear later.

Code below is an example of premature method extraction. Right after the first test passed the developer created this method:

  private boolean isValidArithmeticExpression(String input) {
      return input != null;
  }

The body of the method is perfectly understandable without the method. There was no duplication of the statement and there were just a few lines in the production code. Definitely too risky, too early, too complex.

An example of a wrong encapsulation:

  public String[] splitBySpace(String expression){
      return expression.split(" ");
  }

In this example the method's name says exactly what one can read in the body.

The same applies to helper methods in the tests. This is an example of a wrong encapsulation:

  private void conductCalculationAndAssertResult(String equation, int expectedResult) {
      int result = calculate(equation);
      assertEquals(expectedResult, result);
  }
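By contrast, here is a hypothetical example (not from the original post) of an extraction that does pay off: the name states the rule at a higher level of abstraction, while the body spells out the mechanics.

  // The name says *what*; the body says *how*.
  private bool IsEligibleForDiscount(Customer customer) {
      return customer.YearsOfMembership >= 2
          && customer.Orders.Count > 10
          && !customer.HasPendingPayments;
  }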

Implementing a fake repository

Lately I am using fake objects more than the other common kinds of doubles: stubs, spies and mocks. This is quite recent; I used to prefer stubs, spies or mocks. I find that fakes make my tests less fragile, in the case of outside-in development, when it comes to changes in the design. The problem is that there could be defects in the implementation of the fake objects. A tricky one may appear in the case of in-memory repositories:

  public class InMemoryRepo : JobRepository {
      readonly List<Job> jobs = new List<Job>();

      public void Add(Job job)
      {
          jobs.Add(job);
      }

      public void Update(Job job)
      {
          // TODO
      }

      public List<Job> FindBy(string ownerId)
      {
          return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
      }
  }

This implementation is misleading. If we are test-driving a service, expecting it to update a Job instance, we may forget to invoke the repository's Update method and the test would still pass.

  [TestFixture]
  public class ServiceTest {
      [Test]
      public void AssignsOwnerToJob()
      {
          var service = Factory.CreateJobServiceWithFakeRepository();
          var job = service.CreateJob(irrelevantTitle);

          service.Assign(job, irrelevantOwnerId);

          service.JobsOwnedBy(irrelevantOwnerId).Should().HaveCount(1);
      }
  }

  public class Service {
      private JobRepository repo {get; set;}

      public Service(JobRepository repo){
          this.repo = repo;
      }

      public Job CreateJob(string title)
      {
          var job = new Job();
          repo.Add(job);
          /*...*/
          return job;
      }

      public void Assign(Job job, string ownerId){
          /*...*/
          job.Assign(ownerId);
          /* BUG: forgetting the call to "repo.Update(job)" right here */
      }

      public List<Job> JobsOwnedBy(string ownerId){
          return repo.FindBy(ownerId);
      }
  }

The problem is that the in-memory repository is handing out the same object references all the time. When an object is passed in as an argument, what's actually passed is a copy of the reference. However, if the repository were accessing a database, it would most likely return different instances (references). A better implementation of the repository would be this one:

  public class InMemoryRepo : JobRepository {
      readonly List<Job> jobs = new List<Job>();

      public void Add(Job job)
      {
          /* clone: the object added to the repo
             is different from the given one */
          jobs.Add(job.Clone());
      }

      public void Update(Job job)
      {
          /* find the given job, remove it from the collection
             and add a clone of the given one
             (assuming Job equality is based on its identity) */
          jobs.RemoveAll(j => j.Equals(job));
          jobs.Add(job.Clone());
      }

      public List<Job> FindBy(string ownerId)
      {
          return jobs.Where(job => job.IsOwnedBy(ownerId)).ToList();
      }
  }

Conclusion: the behavior of the fake should be the same as the original's when it comes to handling object references.
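For completeness, here is a minimal sketch of what Clone could look like on Job (the original post doesn't show it; this assumes Job is a simple entity whose fields are immutable values):

  public class Job {
      string ownerId;

      public void Assign(string ownerId) { this.ownerId = ownerId; }
      public bool IsOwnedBy(string id) { return ownerId == id; }

      public Job Clone() {
          // a shallow copy is enough while the fields are strings
          return (Job)MemberwiseClone();
      }
  }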

JavaScript for testers workshop at LTGW2014


Yesterday, Jenny Mulholland (from Softwire) and I ran our workshop on JavaScript for Testers at the London Tester Gathering Workshops 2014, at Skills Matter.

We had a nice group of participants with a wide range of skills and knowledge, and we covered (roughly) multiple aspects of testing JavaScript apps. I had fun and we were busy during the workshop, which is a good sign. If you were in the workshop, please give us some feedback using the comments on this post, apart from the form that Skills Matter will send to you; it will be very appreciated and will help us improve.

These are some of the resources we used:

A big thank you to Jenny for her invaluable help and support before and during the workshop. And to Tony Bruce and Skills Matter for organizing these two great days of networking and workshops. It's fantastic to visit London from time to time and meet so many fascinating people.

Understanding JavaScript Errors

There is no magic in programming languages. The fact that they are formal languages makes them deterministic. For every behaviour there is a reason. For every error there is a logical cause.

Before googling for an odd error message, try to understand what the interpreter means. This is the fastest way to solve the problem.

I often see developers wondering why a certain test throws an error, whilst reading the production code several times, without really understanding the meaning of the error. That's the wrong approach: one must know what the interpreter is saying before searching for the cause. It may be an error in the test itself!

The same error may be expressed in a different manner by different browsers and other JavaScript interpreters like Node. This fact emphasizes the need to understand what they are talking about.

In order to understand JavaScript error messages, one must understand the meaning of "undefined". Undefined is a special value.

  var x; // We are declaring a variable, whose value is undefined

In this case we say that the variable "x" is undefined; it is also the type of the variable:

  typeof(x); // evaluates to the string "undefined" (as a string!)

The value of a variable is undefined until it's assigned a concrete value:

  var x = 5; // x is a Number
  var x = function(){}; // x is a Function
  var x = {}; // x is an Object
  var x = true; // x is a Boolean
  var x = undefined; // x is undefined

Now let's review errors containing the word "undefined" in the message:

TypeError: undefined is not a function (Google Chrome)

The interpreter says that something used as a function is not actually a function; it's an undefined variable. Why does the interpreter assume it's a function? Because there are parentheses after a name. When there is an open parenthesis right after a name, the interpreter determines that the name is a function, according to the grammar of the language. It's exactly like the question mark at the end of a sentence in English: we assume the sentence must be a question because of the mark at the end. The author of the code was probably expecting the variable to be a function at that point but, given some defect, it's not. This is how to reproduce the error:

  var x; // x is undefined
  x(); // and... undefined is not a function, it's just undefined!

TypeError: Cannot read property 'length' of undefined (Google Chrome)

The error says there is a property named "length". A property is a member of an object. The interpreter assumes that "length" must be a property of an object because there is a dot preceding it. Properties are accessed with a dot:

  var obj = {propA: 123, propB: 456}; // an object with 2 properties
  var val1 = obj.propA; // reading a property (123)
  var val2 = obj.propB; // reading a property (456)

The error message says that the variable to the left of the "length" property is undefined; that is, the object is not actually an object, it's the undefined value. This is the way to reproduce the error:

  var obj; // obj is undefined
  obj.length; // trying to reference a property on an undefined value raises an error

If the "property" is accessed to set it a value, the error message will be slightly different:

  obj.length = 777; // TypeError: Cannot set property 'length' of undefined

TypeError: Cannot call method 'trigger' of undefined (Google Chrome)

This one is similar to the previous one. The "of undefined" part expresses a sense of belonging: it must be related to an object. This time it's a method, not a property. The interpreter assumes that it must be a method because there are parentheses after "trigger". The error means that the value of the variable on the left-hand side of the dot is undefined:

  var obj;
  obj.trigger(); // trying to reference a method on an undefined value raises an error

Node.js produces slightly different messages for the same errors:

  var x; // variable x is undefined
  x(); // TypeError: Property 'x' of object #<Object> is not a function

In JavaScript everything happens within a certain environment. When a variable is defined outside of any function, its scope is global. In the case of the browser, the global environment is the "window" object. In the case of Node, it's the "global" object:

  var x = 5;
  window.x; // 5 in the browser, error in Node
  global.x; // 5 in Node, error in the browser

Node makes this fact explicit in the error messages.

TypeError: Property 'x' of object #<Object> is not a function (Node.js)

As there is no dot preceding the "x", we know it's talking about the "global" object. As we are using parentheses after "x", Node assumes it must be a function. Notice that it doesn't reveal the value of "x", which is undefined; it just says that "x" is not a function. So it would raise the same error if "x" were a number.

At the time of this writing, Firefox behaves similarly with regard to the value of the variable:

TypeError: x is not a function (Mozilla Firefox)

  var x;
  x(); // TypeError: x is not a function

ReferenceError: x is not defined

I find this message a bit unfortunate. I would prefer "x is not declared" or "x does not exist", because "not defined" seems to be the same as "undefined", but it's not. This error happens when a variable that has not been declared has its value accessed:

  var x = 10 + someNameNotDeclaredEver; // ReferenceError: someNameNotDeclaredEver is not defined