Methods and processes of Software Engineering: how to create beautiful software

The OOP(S) Concepts You Need To Know

Object-Oriented Programming (OOP, sometimes OOPS for Object-Oriented Programming System) is the most widely used programming paradigm today. While most popular programming languages are multi-paradigm, support for object-oriented programming is fundamental for large projects. Of course OOP has its share of critics, and nowadays functional programming seems to be the new trendy approach. Still, the criticism is due mostly to misuse of OOP.

This means that if you are learning to become a better programmer, it's fundamental to have a good grasp of the main concepts of object-oriented programming and how they work. Maybe you are an experienced programmer, but you started right from practice, without any theoretical background. Or you simply did not update your knowledge while working. You may be surprised by the things you don't actually know. Apparently it can happen to the very best of us.

So we will try to strike the right balance between theory and practice, providing a good number of examples. Our examples will be based on representing a team sport: our domain will be about players, coaches and other staff members. How do you represent all of that? We are going to answer that.

Class

Every player is a different person, but they all have something in common: they can perform the same actions, such as running or passing, and they share certain features, like a number and a position. The same thing could be said for coaches and the rest of the staff. Each one of them will have different combinations, but they all follow the same model.

A class is a model, a blueprint, a template that describes some features.

More precisely, a class represents data, usually with variables called fields, and behaviors, represented by functions, usually called methods. For example, a class Player could have a field called Role, to represent its role, or position, on the actual field of the game, and a Name, to represent the player's name. As behavior it could have a method Pass(Player), which would make the player pass the ball to some other player.

Let’s see an example in pseudo-code.
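Something like the following, rendered here as TypeScript for concreteness (the exact notation is not important):

```typescript
// A Player bundles data (fields) and behavior (methods).
class Player {
    Name: string;
    Role: string;

    constructor(name: string, role: string) {
        this.Name = name;
        this.Role = role;
    }

    // Pass the ball to another player.
    Pass(teamMate: Player): void {
        console.log(this.Name + " passes the ball to " + teamMate.Name);
    }
}
```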

Object

If a class is a model, what are the actual players? They are called objects, or instances, of the class. In the previous example the argument teamMate was an object. To create an object from a class you instantiate it.

For example, John, an object of the class Player, has the Name “Johnny John” and the Role “Attacker”.
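In our TypeScript-flavored notation, creating John could look like this (the teammate is purely illustrative):

```typescript
const john = new Player("Johnny John", "Attacker");
const mate = new Player("Ted Tedson", "Defender");  // an illustrative teammate
john.Pass(mate);
```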

A Black Box

A black box is something of which you can observe input and output, but you ignore how it works: you cannot look inside. This can be a good thing, because you do not depend on what is inside the box. And you do not care if one day someone changes what is inside the box, as long as it still behaves the same as seen from the outside. This principle is applied in OOP, and it is a good thing.

In simple terms: if you just know what something is supposed to do but not how it does it then you cannot mess it up.

The idea is to delegate everything needed to do something of importance to a specific section of the code, so that you can change it, independently of any other, without the risk of breaking something else.

For instance, imagine that the coach has created a certain strategy: he does not need to explain to the players how to pass or how to run. He just needs to tell them what they have to do. The players themselves must know how to actually do these things. We want to achieve the same organization in our programming.

We can achieve it with Abstraction and Encapsulation.

Let’s start from a common pseudo-code.
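A sketch of what that could look like, again in TypeScript (the names OldMan, Run and Legs anticipate the next two sections):

```typescript
class Legs {
    Tiredness = 0;
    Injured = false;
}

class Player {
    Name: string;
    private Legs: Legs = new Legs();  // hidden from the outside world

    constructor(name: string) {
        this.Name = name;
    }

    Run(): void {
        // Only the Player itself operates on its Legs.
        if (!this.Legs.Injured) this.Legs.Tiredness += 1;
    }
}

class Coach {
    Name: string;

    constructor(name: string) {
        this.Name = name;
    }

    OrderToRun(player: Player): void {
        // The coach neither knows nor cares how running actually works.
        player.Run();
    }
}

const john = new Player("Johnny John");
const oldMan = new Coach("Old Man");
oldMan.OrderToRun(john);
```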

Abstraction

Abstraction refers to hiding the details of the implementation from outside the class.

For example, OldMan, an object of the class Coach, calls the method Run() of John, an object of the class Player. It does not need to know what John must do to actually run. It just needs to know that an object of the class Player has the method Run().

Encapsulation

Encapsulation involves two notions: restricting access to some of the fields and methods of a class from the outside world, and binding together related data and methods.

For instance, to simulate the ability to run, the class Player has a method called Run(), but it also has a field called Legs. This field represents the condition of the legs of the player: how tired they are and their health, that is to say whether they are injured. The outside world does not need to know that the Player has Legs or how they operate.

So the class Player hides the field Legs from the outside world. Since Run() only needs to operate on Legs, hiding the field guarantees that the individual object is completely shielded from external interference. This is useful if you later want to add the effects of different shoes to the simulation: you just need to modify the class Player, and nothing else.

Inheritance

To solve a problem you usually end up creating classes which are somehow related: they share some characteristics or even some behaviors. Since you want to avoid repetition, and thus errors, you want to collect all these common features in a common class, usually called the parent, super, or base class. Once you have created this class, the other classes can declare to be like it, or to inherit from it. This is called inheritance.

The end result is that each of the classes that inherit from the parent class also has the methods and fields of the parent class, in addition to its own.

For example, you notice that the Player, Coach and Staff classes all have a name and a salary, so you create a parent class called Person and make them inherit from it. Notice that you keep creating only objects of the classes Player, Coach, etc.: you never need to explicitly create an object of the class Person.

In some languages you can explicitly forbid creating objects of the class Person by marking it as an abstract class. In such cases, a class that you can actually instantiate is called a concrete class.
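A sketch of the hierarchy (illustrative; marking Person abstract is what forbids instantiating it directly):

```typescript
abstract class Person {
    constructor(public Name: string, public Salary: number) {}
}

class Player extends Person {
    constructor(name: string, salary: number, public Role: string) {
        super(name, salary);
    }
}

class Coach extends Person {}

const john = new Player("Johnny John", 50000, "Attacker");  // fine
// const nobody = new Person("Nobody", 0);  // compile error: Person is abstract
```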

Most object-oriented languages support inheritance; some also support multiple inheritance, where a class can inherit from multiple classes. This is not always possible, because it creates problems and adds complexity. A typical problem is deciding what to do when two different parent classes have a method with the same signature.

In common parlance, inheritance defines an is(-a-type-of)-a relationship between two classes. In our example, a Player is(-a-type-of)-a Person.

Interface

An interface, also known as a protocol, is an alternative to inheritance for two unrelated classes to communicate with each other. An interface defines methods and (often, but not always) values. Every class that implements the interface must provide a concrete method for each method of the interface.

For example, you want to simulate ejection, or dismissal. Since only players and coaches can be ejected from the field, you cannot make it a method of the parent class that represents people. So you create an interface Ejectable with a method Ejection() and make Player and Coach implement it.
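A sketch of the interface and of two classes implementing it:

```typescript
interface Ejectable {
    Ejection(): void;
}

class Player implements Ejectable {
    constructor(public Name: string) {}

    Ejection(): void {
        console.log(this.Name + " is ejected from the field");
    }
}

class Coach implements Ejectable {
    constructor(public Name: string) {}

    Ejection(): void {
        console.log("Coach " + this.Name + " is sent to the stands");
    }
}
```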

There is no standard way of describing the relationship that an interface establishes, but you can think of it as behaves-as. In our example, a Player behaves-as Ejectable.

Association, Aggregation, Composition

Inheritance and interfaces apply to classes, but there are also ways to link two, or more, different objects. These can be ordered by the looseness of the relationship: association, aggregation and composition.

Association

An association simply describes any kind of working relation. The two objects are instances of completely unrelated classes, and neither object controls the lifecycle of the other. They just collaborate to accomplish their own goals.

Imagine that you want to add the effect of the audience on the players. In real life the audience is made of people, but in our simulation they are not children of the class Person. You simply want to make it so that if the object HomePeople of the class Audience is cheering, then John plays better. So HomePeople can affect the behavior of John, but neither HomePeople nor John can control the lifecycle of the other.

Aggregation

An aggregation describes a relationship in which one object belongs to another object, but they are still potentially independent: the first object does not control the lifecycle of the second.

In a team sport, all objects of the class Player belong to an object of the class Team, but they don't die just because they are fired. They could be unemployed for a while or change Team.

This kind of relationship is usually described as has-a (or is-part-of), or inversely belongs-to. In our example, the object Winners of the class Team has-a John of the class Player, or inversely John belongs-to Winners.
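A sketch of aggregation: the Team references Players, but does not control their lifecycle.

```typescript
class Player {
    constructor(public Name: string) {}
}

class Team {
    private players: Player[] = [];

    constructor(public Name: string) {}

    Hire(player: Player): void {
        this.players.push(player);
    }

    Fire(player: Player): void {
        // The player is removed from the team, but keeps existing.
        this.players = this.players.filter(p => p !== player);
    }
}

const winners = new Team("Winners");
const john = new Player("Johnny John");
winners.Hire(john);
winners.Fire(john);  // john is now unemployed, not destroyed
```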

Composition

A composition describes a relationship in which one object completely controls another object, which has no independent lifecycle.

Imagine that we want to add stadiums, or arenas, to our simulation. We decide that an object Arena cannot exist outside of a Team: arenas are owned by a Team, which decides their destiny. Of course, in real life an arena doesn't magically disappear as soon as a team decides to dismiss it. But since we want to simulate only team sports, for our purposes it is out of the game as soon as it stops being owned by a team.
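A sketch of composition: the Arena is created and destroyed by its Team, and is never handed out to the rest of the program.

```typescript
class Arena {
    constructor(public Name: string) {}
}

class Team {
    private arena: Arena | null = null;

    BuildArena(name: string): void {
        this.arena = new Arena(name);
    }

    DismissArena(): void {
        this.arena = null;  // no other reference exists, so the Arena is gone
    }
}
```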

Compositions are described just like aggregations, so pay attention not to confuse the two.

Polymorphism

Polymorphism, in the context of OOP, means that you can invoke the same operation on objects of different classes and they will all perform it in their own way. This is different from, and should not be confused with, a programming concept that is independent of OOP: function (method) overloading. Overloading applies to functions and allows you to define functions with the same name that operate on different and unrelated types. For example, you can write two add() methods: one that adds integer numbers and another that adds real numbers.

Polymorphism in a class hierarchy usually means that a method can operate on different objects of related classes. It can behave differently on different objects, but it does not have to. At the most basic level, an object of a child class (one that inherits from a parent) can be used wherever an object of the parent class can be used. For example, you can write a function Interview(Person) that simulates an interview with a Player, Coach or Staff member, no matter the actual object. Obviously, for this to work, Interview can only act upon fields that are present in the Person class.
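A self-contained sketch of this kind of polymorphism:

```typescript
class Person {
    constructor(public Name: string) {}
}

class Player extends Person {}
class Coach extends Person {}

// Interview accepts any Person, so it works on Players and Coaches alike.
function Interview(person: Person): void {
    // Only fields declared by Person are available here.
    console.log("Today we talk with " + person.Name);
}

Interview(new Player("Johnny John"));
Interview(new Coach("Old Man"));
```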

In practice this means that an object of a child class is also an object of the parent class.

In some cases a child class can redefine a method of the parent class with the same name; this is called overriding. In this situation, whenever the method is called, the one actually executed is the child class's version. This is especially useful when you need all the child classes to have a certain behavior, but there is no way to define a generic one. For example, you want all objects of the class Person to retire, but a Player must retire after a certain age, while a Coach or Staff member does not.

Delegation And Open Recursion

In object-oriented programming, delegation refers to evaluating a member of one object (the receiver) in the context of another, original object (the sender) – from Wikipedia

The concept is used extensively in a particular style of object-oriented programming called prototype-based programming. One widespread language that uses it is JavaScript. The basic idea is to have no classes, only objects: you do not instantiate objects from a class, but clone them from a generic one and then modify the clone's prototype to suit your needs.

Despite being at the core of the best-known programming language, it is little known. Indeed, even JavaScript is mostly used in the usual class-based style of object-oriented programming. While it may seem arcane knowledge, it is important to know because it is widely used through a special variable or keyword called this or self.

Most object-oriented programming languages support this keyword, which allows a method defined in a class, or in any child class, to refer to the specific object (not the class) that will be instantiated. The fact that, when the code runs, this will refer to a specific object is what allows open recursion: a base class can define a method that uses this to refer to one of its methods, and the actual object will use it to refer to a child method with the same signature.

This sounds complicated, but it is not. Imagine the following pseudo-code.
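A sketch in TypeScript, laid out so that the call to this.IsOld() falls on line 16, which the next paragraphs refer to:

```typescript
class Person {
    Name: string;
    Age: number;

    constructor(name: string, age: number) {
        this.Name = name;
        this.Age = age;
    }

    // A generic Person is old past 70.
    IsOld(): boolean {
        return this.Age > 70;
    }

    DoesHaveToRetire(): boolean {
        return this.IsOld();  // line 16: open recursion happens here
    }
}

class Player extends Person {
    // A Player ages out of the game much earlier.
    IsOld(): boolean {
        return this.Age > 40;
    }
}

const John = new Player("Johnny John", 45);
console.log(John.DoesHaveToRetire());  // true: Player.IsOld() is the one called
```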

The last line is where open recursion happens: the method DoesHaveToRetire is defined in the parent class and uses this.IsOld() (on line 16), but the IsOld method actually called at runtime is the one defined in the child class Player.

This is also delegation, because on line 16 this is evaluated in the context of the object John as an object of the Player class, and not the original this of John as an object of the Person class. Remember that John is both an object of the class Player and of its parent class Person.

Symptoms Of Bad Design

Up until now we have talked about the basics. Most people need more than that to apply this knowledge fruitfully. First, we need to look at the symptoms of bad design, so you can detect them in your code.

Rigidity

The software is hard to change, even for little things. Every modification requires cascading changes that take weeks to apply. The developers themselves have no idea what will happen, and what will have to be changed, when they need to do X or Y. This leads to reluctance and fear of change in both the developers and the management. And that slowly makes the code very hard to maintain.

Fragility

The software breaks in unexpected ways with every change. This problem is related to rigidity, but it's different, because there is no sequence of modifications that keeps growing by the hour. Everything seems to work, but when you think you are ready to ship the code, a test, or even worse, a customer, tells you that something else no longer works. The new thing works, but another one is broken. Every fix is actually two new problems. This leads to existential dread in developers, who feel they have lost control of the software.

Unportability

Every module works, but only in the exact situation it has been placed in. You cannot reuse the code in another project, because there are too many little things that you would have to change. The program seems to work by dark magic. There is no design, only hacks. Every time you modify something you know what to do, but it is always a terrible thing that makes you afraid it will come back to bite you. And it will. Apart from the shame, which you should really feel, this makes the code hard to reuse: instead you will recreate slightly different code that does almost the same thing.

Principles Of Good Design

Knowing what a problem looks like is not enough. We also need to know practical design principles, to be able to avoid creating a bad design in the first place. These well-known principles are the fruit of many years of experience and are known by their acronym: SOLID.

Single Responsibility

There should never be more than one reason for a class to change1

Usually “reason to change” is restated as “responsibility”, hence the name of the principle, but this is the original formulation. In this context a responsibility, or reason to change, depends on the particular project and its requirements. It obviously does not mean that a class should only have one method. It means that, when you describe what the class does, you say that it does only this one thing. If you violate this principle, different responsibilities become coupled and you might have to change the same class, or multiple classes, for many disparate reasons.

Let's go back to our example. You need to keep the score of the game. You might be tempted to use the Player class; after all, players do the scoring. But if you do that, every time you need to know the score you also need to interact with the Player. And what would you do if you needed to invalidate a point? Keep one job for each class.
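A sketch of keeping the score in its own class, instead of burdening Player with it:

```typescript
class Scoreboard {
    private points = new Map<string, number>();

    AddPoint(teamName: string): void {
        this.points.set(teamName, (this.points.get(teamName) ?? 0) + 1);
    }

    InvalidatePoint(teamName: string): void {
        this.points.set(teamName, (this.points.get(teamName) ?? 0) - 1);
    }

    Score(teamName: string): number {
        return this.points.get(teamName) ?? 0;
    }
}
```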

Open/Closed

A module should be open for extension, but closed for modification2

In this context a module means a class, or a group of classes, that takes care of one objective of the software. This principle means that you must be able to add support for new items without having to change the code of the module itself. For example, you must be able to add a new kind of player (e.g. a keeper) without changing the Player class.

This allows a developer to support new things, which perform the same functions as the ones you already have, without having to make a “special case” for them.

Liskov Substitution

Subclasses must be usable as their base classes.2

Note that this is not the original formulation, because that one is too mathematical. This principle is so important that it has been incorporated into the design of object-oriented languages themselves. But the language guarantees only part of the principle, the formal part. Technically you can always use an object of the subclass as if it were an object of the base class; what matters to us, though, is also the practical usage.

This means that you should not substantially modify the behaviour of a subclass, even when you override a method. For example, if you are making a racing game, you cannot make a subclass for a car that moves underwater. If you do, the Move() method will behave differently from the base class, and a few months later there will be a weird bug of random cars taking the Gulf Stream as a highway.

Interface Segregation

Many client specific interfaces are better than one general purpose interface2

In this context “client specific” means specific for each type of client, not for each and every client class. This principle says that you should not implement one generic interface for clients that really do very different things, because that couples each type of client to the others. If you modify one type of client, you have to modify the general interface, and maybe you also have to modify the other clients. You can recognize a violation of this principle when the interface has several methods that are specific to one client, either because they do usual things in a different, specific way, or because they do things that are needed only by that one client.

In practical use this is probably the most difficult principle to respect, especially because at the beginning it is easy to miss what the actually different requirements are. For example, imagine that you have to deal with connections: a cable connection, a mobile connection, etc. They are all the same, aren't they?

Well, in theory they behave the same way, but in practice they may differ. Mobile connections are usually costlier and have strict limits on the amount of monthly transferred data, so a megabyte on a mobile connection is more precious than one on a cable connection. As a consequence you might find yourself checking the data less often, or using a SendBasicData() instead of SendData()… Unless you already have experience in the specific field you are working in, you will have to keep forcing yourself to follow this principle.
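A sketch of how the segregation could look, with a separate interface for the clients that genuinely need metering (the method names extend the SendData()/SendBasicData() example above):

```typescript
interface Connection {
    SendData(data: string): void;
}

// Only the clients that deal with metered links depend on this interface.
interface MeteredConnection extends Connection {
    SendBasicData(data: string): void;  // a cheaper, reduced payload
    RemainingQuota(): number;           // remaining monthly data, in megabytes
}
```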

Dependency Inversion

Depend upon Abstractions. Do not depend upon concretions2

At this point it should be clear that by “abstraction” we mean interfaces and abstract classes. And it should also be clear why the principle is true: an abstraction, by definition, deals with the general case. By depending on abstractions you make it easier to add new features or to support new items, like different databases or different programs.

Of course, you should not use interfaces for everything just so that no two classes ever touch each other. But when you do need an abstraction, you cannot let the abstraction leak. You cannot demand that the client of the interface be aware that it must use the interface in a certain way. If you find yourself checking whether an interface actually represents a certain class, and then doing Y instead of X, you have a problem.

Conclusions

We have seen the basic OOP(S) concepts that you need to know to be a good programmer, and the fundamental design principles that should guide how you develop software. If you are just starting out, these might seem a bit abstract, but like all specialized knowledge they are needed to communicate well with your colleagues, both with your voice and with your code. They are fundamental to organize your knowledge and practice of Object-Oriented Programming, so that you can create code that is easily understandable by other people.

Now that you have a solid foundation you can move forward. And, above all, you can finally participate in the wonderful polemics of the programming world. I suggest you start with Composition or inheritance?.

 

Using the Redmine API to create a page where to quickly add and edit tasks

Recently I have been looking for the right issue tracker for my needs and I compared a few tools including Jira, Trello, Asana and Redmine. You can read about it here.

Redmine was almost good enough, but I wanted to be able to quickly add and edit tasks. Installing plugins for Redmine seems painful, so I used the Redmine API instead. Basically, I can run a separate web application which interacts with my Redmine installation.

The code is available on GitHub: https://github.com/ftomassetti/redmine-reactive

Using Redmine API to interact with a Redmine installation

What I want to do is use the Redmine API to build a new HTML page where I can display the data I have in Redmine. Redmine offers REST APIs over XML and JSONP.

Now, if you just want to read information from a Redmine installation, you can do that through JavaScript even if your JavaScript is served from a different domain. So you can have a simple HTML file with some JavaScript, and open that local file to get a custom view of a Redmine installation hosted elsewhere, for example on your server.

If instead you want to write changes into Redmine, you need to issue PUT and POST calls, and you cannot do those through JavaScript from a different domain. The workaround is to build a tiny Python web application that performs the PUT and POST calls to your Redmine application: your local page calls the local Python webserver and lets it forward the request to Redmine. Now, this sounds stupid to me, but it is a consequence of the mechanism preventing cross-domain scripting.

First: getting a list of projects

First, I need to get a list of projects from Redmine. To do that we just need a few lines of Javascript and our API key, which can be found in your profile. The JS code is displayed below.
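A sketch of what the call could look like, assuming jQuery and Redmine's JSONP support (the Redmine URL, the API key and the element IDs are placeholders):

```javascript
var REDMINE_URL = "http://my.redmine.example";
var API_KEY = "your-api-key";

function loadProjects() {
    $.ajax({
        url: REDMINE_URL + "/projects.json",
        data: { key: API_KEY },
        dataType: "jsonp",
        jsonp: "callback",
        success: function (data) {
            // Fill the combobox with one option per project.
            data.projects.forEach(function (project) {
                $("#projects").append(
                    $("<option>").val(project.id).text(project.name));
            });
        }
    });
}
```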

We display the list of projects in a combobox, just like this:

[Image: the list of projects shown in a combobox]

Second: getting a list of issues

Now, every time someone selects a project we want to load the list of corresponding issues in a table. So we add this code:
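A sketch of the handler (same assumptions as above):

```javascript
// Reload the issues whenever the selected project changes.
$("#projects").change(function () {
    loadIssues($(this).val());
});
```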

And this is the code to actually load the issues:
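A sketch, using the issues.json endpoint of the Redmine REST API (the table markup is a placeholder):

```javascript
function loadIssues(projectId) {
    $.ajax({
        url: REDMINE_URL + "/issues.json",
        data: { key: API_KEY, project_id: projectId },
        dataType: "jsonp",
        jsonp: "callback",
        success: function (data) {
            var table = $("#issues").empty();
            data.issues.forEach(function (issue) {
                table.append($("<tr>")
                    .append($("<td>").text("#" + issue.id))
                    .append($("<td>").text(issue.subject))
                    .append($("<td>").text(issue.status.name)));
            });
        }
    });
}
```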

Ok, that is it: you can now see the issues of your project in a separate page!

This is what it looks like. We did not see how to add the priority, but the code is available on GitHub.

[Image: the issues of the selected project listed in a table]

Adding issues

We have seen at the beginning that we need to go through a server to send write requests to Redmine. To do that I have created a very simple Flask application in Python. I also took advantage of the python-redmine library.

To create a new issue we need this code on the server:
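A sketch of the endpoint (the URL, the port and the field names are assumptions; redmine.issue.create comes from the python-redmine library):

```python
from flask import Flask, jsonify, request
from redmine import Redmine  # the python-redmine library

app = Flask(__name__)
redmine = Redmine('http://my.redmine.example', key='your-api-key')

@app.route('/issues', methods=['POST'])
def create_issue():
    data = request.get_json()
    issue = redmine.issue.create(
        project_id=data['project_id'],
        subject=data['subject'])
    return jsonify({'id': issue.id})

if __name__ == '__main__':
    app.run(port=5000)
```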

And we need to call it from JS when we click on the “Add issue” button:
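A sketch of the client side (the port and element IDs match the assumptions above):

```javascript
$("#add-issue").click(function () {
    $.ajax({
        url: "http://localhost:5000/issues",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({
            project_id: $("#projects").val(),
            subject: $("#new-subject").val()
        }),
        success: function () {
            // Refresh the table so the new issue shows up.
            loadIssues($("#projects").val());
        }
    });
});
```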

Fairly easy, isn’t it?

Conclusions

After that, I added the possibility to edit the subject of issues and to assign the priority from a list.

In practice I can use it like this.

So, if you cannot find the perfect project management tool, do not despair: you can improve the existing ones by adding the views you need.

 

On the quest for the right project management tool: Jira, Trello, Asana, Redmine

I have used many different project management tools but I have to say that I have not yet found one which really makes me 100% happy to use it.

I have played with many tools, used some of them extensively, and I was growing a bit frustrated about this. In the good old times I also hacked a couple of tools myself. Now that I am older (and maybe wiser) I decided to sit down and think about this problem.

Activities involving a project management tool

First of all, I tried to examine for which activities I feel I need a project management tool. Here is the list:

  1. Planning sessions: I brainstorm ideas, review existing tickets and plan work for the week while talking over Skype with other people. To be efficient during planning sessions I need an easy way to identify tickets (“ok, so I am going to close X-123, right?”, “Y-34 is urgent”) and it helps to have the possibility to quickly create tasks, without having to go through one dialog box (or more) for each new ticket.
  2. Scheduling sessions: at the beginning of the day or the week, alone or talking with others, I want to take a look at my open tickets and decide in which order I am going to work on them. I want to be able to assign due dates and priority levels. Ideally I would like also to be able to order the tickets.
  3. Coding sessions: while coding I want to open my tickets and add comments. Possibly I want to assign the ticket to someone else if I need their comments to move on. I need an identifier to be used in branches and commits. It also helps to be able to quickly mark the ticket as done and see what else I need to work on. During coding sessions I should not do any scheduling; that should be done upfront.

Features I am looking for

The right project management tool is partially a question of feeling: you want something with an easy-to-use interface, responsive and intuitive. However, I tried to identify a short list of features that matter to me:

  1. (Critical) Issues should have a short ID, something that can be used when discussing them and to tag commits. It may seem a minor thing, but for me this basic feature is fundamental.
  2. It should be possible to quickly create a list of tasks, from one single screen. This is important during planning sessions to be able to sketch a list of tasks without having to go through several pages.
  3. It should be possible to specify due dates. It should be also possible to sort tasks by that due date and in general to see which tasks are about to expire.
  4. It should be possible to specify priority levels. Not all tasks are created equal. It is true that ordering issues is a partial substitute for that but I think that the possibility to assign priority levels help when planning.
  5. The number of fields of an issue should be limited. I do not want to have to find the useful information in a sea of useless fields. And I do not want absolutely to have to fill fields which are irrelevant to me.
  6. It should be possible to assign tickets to project members. Basic stuff.
  7. It should be possible to comment on tickets. Other really basic stuff.
  8. (Optional) It should be possible to find all the open issues for a project. Creating issues is part of the process, but querying, filtering and examining issue lists is another very important activity.
  9. (Optional) It should be possible to order issues. I would like to define an order for issues, so that when I am working I can just pick the issue on top and keep crunching issues and move forward.
  10. (Optional) It should be easy to get a list of closed tasks

Now let’s take a look at a few candidates.

Jira

I have used Jira while working at several companies.

One thing that can be said about Jira is that it is complete.

Another thing that can be said is that it is bloated.

Did you ever have the feeling Jira asks you to go through screens which are completely irrelevant? Have you ever had the feeling there are just too many fields?

Now, it is true that it can be heavily configured, but this seems not to be a trivial task. I have seen several people who had used Jira for years be regularly confused by it and waste a lot of time finding the screens they did not use daily. When I was working at a certain company (in Ireland) we had to ask our Jira administrator (in Germany) to configure Jira for us. I think this was a very clever way to add even more red tape. Needless to say: we wasted several days just to have Jira configured for each project. No good.

Trello

Now, Trello is fun to use and very good to get started with. I am using it and I have found a plugin to show the short ID of cards. It is not perfect: you need to refresh the page after inserting a card. It is a bit annoying, but not a deal-breaker.

It is very easy and fun to quickly add a few cards, and there is no red tape at all. It is absolutely a pleasure to use while planning. It becomes less nice when you need to examine what you have done in previous weeks. Do you archive the lists of tasks done last week? Do you keep a huge list of “Done” tasks?

Also, you cannot add priorities to tasks, and while you can insert due dates, I could not find a way to easily see tasks which are about to expire or to get a notification that a task has expired. I just feel that the tool is not helping with managing; everything is up to my goodwill and attention.

Redmine

Redmine is the open-source competitor. The default interface seems reasonable, but a bit ugly. However, you can add a lot of plugins, and the large Easy Redmine plugin seems to produce a very nice product. It remains a bit slow and sluggish, but I think it has all the features I would be looking for.

The only hosting I found for Redmine starts at 29 euros per month (and only with annual billing).

Asana

Asana is, to me, something in between Trello and Jira. It seems good, but in my opinion it fails to deliver. It is very easy to create a list of tasks, and you can assign due dates. However, it feels not very structured: it seems unstructured like Trello, with an interface similar to Jira.

Wrike and Azendoo seem similar to Asana.

While these tools are interesting, they are not contenders for me because they lack simple IDs for items. Some of them have long UUIDs associated with tasks, but this is not something I could use to mark commits or to discuss tasks. I tried doing without IDs, but it just complicates things. I also tried assigning IDs manually, writing them in the title of the issue, but this is rather awkward and you can end up having duplicates.

Comparison

There are other contenders like Taiga (it seems very rigid to me) and Basecamp (I did not spend too much time on it). For now I have focused on Jira, Trello and Redmine. Asana, Wrike and Azendoo have been excluded because they do not assign human-readable IDs to tasks.

| Feature | Jira | Trello | Redmine |
| --- | --- | --- | --- |
| Feature 1: IDs | Yes | Yes | Yes |
| Feature 2: quick task creation | No | Yes | Yes |
| Feature 3: due dates | Yes | Partial | Yes |
| Feature 4: priority levels | Yes | No | Yes |
| Feature 5: few fields | No | Yes | Yes |
| Feature 6: assign tickets | Yes | Yes | Yes |
| Feature 7: comment tickets | Yes | Yes | Yes |
| Feature 8: filter tickets | Yes | No | Yes |
| Feature 9: order tickets | Partial | Yes | No |
| Feature 10: history | Yes | No | Yes |

My totals are:

  • Jira: 7.5
  • Trello: 6.5
  • Redmine: 9

When I say Redmine I actually mean Easy Redmine, or Redmine with some plugins like AgileDwarf.

Conclusions

Jira is not perfect, and sometimes I hate it. I think everyone hates it a little bit. However, it is something that works. It has a good hosting offer and it has a nice REST API which I could use to implement quick creation of issues for planning sessions. It also has a CLI based on that REST API (this CLI requires an additional plugin which is not free of charge).

I also like Redmine, and I think that with a few additions I could get something rather good. What I do not like is the lack of hosting options.

I plan to focus on Jira and Redmine and see what kind of workarounds I can find for their problems. After that I will need to think about the deployment: use something from the cloud? Host it on my servers?

And you? What tools do you use to stay on top of your activities?

The 5 things a developer expects from a Project Manager: how a Project Manager can help developers become much more productive


How project managers can help developers be productive

To be effective as a software developer, technical excellence is not enough. On top of that, there are several other aspects on which a great professional should focus. Near the top of my list is the ability to interact with the other people involved in the project. Whatever the nature of your project, you will need to interact to get things done:

  • as an open-source contributor you have to collaborate, reviewing patches or having your patches reviewed; you have to address the issues brought up by users; you want to communicate your features to new users and plan with other committers or co-maintainers
  • as a freelancer you have to interact with current customers and with potential ones. You also have to interact with the other developers, designers or testers involved in the projects, and you need to communicate clearly who is responsible for what
  • when working in a company you have to coordinate with the other developers, in your team and in other teams, communicate with your manager and, most importantly, interact with project managers

Developers and PMs… not always love at first sight

Relationships with Project Managers can be a bit controversial from time to time: we developers are very prone to complaining about them. After all, they are the ones bothering us about some change that needs to go in at 6 PM on a Friday. Or the ones who keep pushing for features which do not make sense to us.

However, I think that Project Managers play a fundamental role in a successful team. And as a developer I can be successful only if the team is successful. For this reason I think that having a great relationship with Project Managers is the key to delivering results. I was lucky enough to work with amazing Project Managers who really helped me. This was especially true when I was at TripAdvisor: there I met PMs who were absolutely great. That does not mean I did not complain about them with my fellow developers from time to time 🙂

It does mean, however, that I understood that if we were going in the right direction, delivering features at high speed, and able to coordinate our projects with the rest of the company, it was because of their work.

So I am very convinced that PMs can make a difference. But this difference can be either atrociously negative or wonderfully positive. I am not saying I understand all the responsibilities of a PM, and I am sure there are a lot of things they do without interacting with developers like me. I am just thinking specifically of how PMs interact with developers, and of what kind of expectations I have as a developer towards PMs.

From my point of view, a Project Manager can make the life of developers easier by doing these five things.

1) Communicate business priorities and consider technical priorities

We are all over-worked, and we all have crazy piles of work that someone expects us to do within the week. As a developer I need to be able to evaluate the effort needed to complete each task and the relations between tasks. Perhaps a certain refactoring will simplify developing a certain feature, so it makes sense to order those tasks accordingly. A certain task could take two weeks, while implementing three other features could require just half a day each, so I could prefer to work on those first.

But technical aspects are just half of the picture: ordering tasks requires us to be aware of the priorities the business has. What is most important for the customer? Which features can have an immediate impact on our revenues? That is very important for us to know, to decide where to spend our energy and what to deliver first. I think PMs need to discuss priorities frequently with developers, and understand that business priorities and technical priorities both need to be considered when deciding what to work on next.

And sometimes, dear PMs, technical priorities are important too: they cannot always be ignored in favor of business priorities alone, because doing so will affect our ability to deliver software, and consequently the business.

2) Let the developers know about deadlines well in advance

Did it ever happen to you to find out that something is needed… later today? Or that the customer was promised a new build yesterday? Those are not nice surprises. Let's be clear: sh*t happens and requires us to deal with it. The application goes down and the company loses money every minute: you stop whatever you are doing and you fix it. A new bug is found and it is bad, or a security concern needs to be addressed as soon as possible. In real life there are things that we cannot plan for. We can just react to them.

However, this cannot be the case for everything: we cannot always be doing Emergency-Driven Development. That is just bad practice. Deadlines need to be agreed upon and communicated, so that they can be planned for. As developers we frequently do not see the whole picture, but the same goes for PMs: there are technical aspects that they could be ignoring, which could make it impossible to meet a deadline, without them knowing far in advance. So please, dear PMs: let us know about deadlines as soon as you know them.

Caveat: when I say “let the developers know about deadlines” I mean the real deadlines. One of the worst things a PM could ever do is to present some fake, self-imposed deadline. Some PMs came up with the idea of assuring themselves a time buffer by telling the developers that the customer expects the delivery to happen on the 1st when they actually agreed on the 15th. Maybe they do that because we frequently deliver things late but… guess what? Sooner or later the developers will find out and think about all the useless stress and the long hours they went through because of your lies. How do you think they are going to react?

3) Manage communications

I know it could sound implausible, but developers tend to have a few defects. One of them is that developers tend to communicate… differently. They tend to be direct and blunt. And that works great with machines, a bit less with customers. Yes, it is possible that the SDK the customer gave you seems… suboptimal. Yes, it could seem a perfect example of what a bunch of monkeys could do if we gave them a box of cheap whiskey and a manual of all the possible bad practices in software engineering. Well, it is still not a good idea to let the customer know that. It is much better to let the PM rephrase it for us.

And I love it when the PM chases the customer about that decision we need them to make. Or talks with the PM of another team to convince them to answer the requests from our team. Yes, we really needed an answer, all three times we forwarded the same request that had been ignored for the last couple of weeks.

A fantastic PM will provide us with all the information we need to do our job and will ensure communication runs smoothly with all the parties involved. We probably will not even realize the effort they had to put into this.

4) Shield developers from issues

Life in a company is stressful. Stress comes from different sources and it needs to be managed. As software developers we deal with a lot of stress coming from technical challenges: a bug which is very difficult to reproduce, an intermittent issue depending on thread synchronization, the new release of a framework which silently breaks everything, an unreliable piece of infrastructure that makes some integration test fail. We have plenty of reasons to be stressed.

And PMs do too: they deal more than us with internal politics, they take part in discussions about which features to develop and which not, they may fight to get resources for the team. They may compete with other teams, and they have customers screaming at them. Well, I am sorry for that, but if PMs start to push all their stress onto the developers, then we, the developers, end up between the anvil and the hammer. We would be constrained both by the reality of technical difficulties and by the pink-pony craziness of customers and politics. That is just too much, so let's make a deal: we stress about technical stuff, and that is our burden. All the rest, I am sorry, PMs, but that is for you to deal with.

5) Make sure we work on relevant projects

It can happen that you work very, very hard on building something that has a very limited impact on the company's product. While it could still be satisfying for someone who loves to build cool stuff, in the long run this is going to be detrimental to your career. It does not help you get a promotion if you keep working only on irrelevant stuff. Working on something that can really make a difference for the business has several advantages instead: it gives you additional motivation, the possibility to get noticed, and possibly access to more resources and more support from the organization.

Working on irrelevant features is not even the worst thing that could happen. You could work on projects that are discarded before, or immediately after, they are completed. Imagine putting your passion and sweat into something that is just thrown away. Not a nice feeling, eh? So it is better to work with PMs who do not put you in such situations.

In the end

I think Project Managers intercept many problems before we can even see them. They are our interface with the customers and with the rest of the organization. They ensure we are working on something which will provide value to our users. They keep track of the whole picture so that we can have the luxury of just focusing on the next feature to implement, testing it and shipping it.

And it is a fact that developers take PMs for granted most of the time. I think it is pretty common for developers to underestimate PMs. We often just do not understand all their responsibilities. But trust me, if you work with a great PM and then a… not so great one, you will definitely notice the difference.

I hope both us developers and project managers can benefit from a better relationship. I can tell you it is possible to do so: I live with a PM who happens to be my girlfriend 🙂

An Erd web server: generating Entity Diagrams from a textual description with Haskell

Last week I started discussing how I am working on improving my approach to diagram generation, so that it could become acceptable for a Software Engineer.

Currently I write a textual description of my entity diagrams and I generate the corresponding images using Erd. Erd is a fantastic tool written in Haskell which parses its own DSL and invokes graphviz, producing images in several different formats. To use it you need to install Graphviz, the Haskell platform and cabal. This is not always straightforward, so I wanted to create a web interface for this program. This would offer a few benefits:

  1. I could then use Erd from wherever I want, without the need to install the toolchain

  2. I could later add Syntax highlighting and GitHub integration

  3. I would have an excuse good enough to play with Haskell before totally forgetting the few things I have learned

The basic idea is pretty simple: the web application presents an editor, and when the user clicks a button the code is sent to the server. The server processes the code and generates an image, which is sent back to the browser and displayed in the page. It should be enough for the MVP (Minimum Viable Product: now that the enterprise world has taken my soul I have to use these acronyms, right?).

I wrote this piece of software, named Erd web server, and it is available on GitHub: https://github.com/ftomassetti/erd-web-server. In the rest of this post I describe its implementation.

Interfacing with Erd

Erd is a nice program written in Haskell. Now, ideally my project would just add Erd to its dependencies and everything would be easy and fine. Unfortunately that is not possible, because Erd is a cabal application module (Cabal is the package manager for Haskell), which means the module cannot be used as a dependency of other modules. So ideally I would like Erd to expose a library and, as a separate project, a console utility wrapping that library. I opened an issue in the Erd project and I hope to be able to send a pull request in the future. For now I just copied the code from Erd into my project. Basically, I removed the Main function and changed one single method (loadER) to accept a Text instance instead of a file handle. In this way I can receive the code in an HTTP request and process it in memory, without the necessity of dumping it to a file.

Overview of the routes available

The Erd web server just presents a web page where users can edit the description of the ER model. The user can then click a button, which causes a POST request to be sent to /generate with the code of the ER model. If things go well, an image is generated and can be retrieved under /generated.

In addition to that, we also need to serve assets (Javascript and CSS).

The viewIndex and generate functions

Serving the index is easy: we just read an HTML file and return it. In the future we could use some template system if we need to have some variable value.

The generate function contains the most interesting part of the code. It operates in the Snap monad, from which we can access the request and prepare the response. We have to lift IO operations using liftIO, which basically translates IO X values into Snap X values.

The first thing we do is get the body of the request (getRequestBody) and pass it to processErCode. processErCode can either succeed (if the ER code is correct) or fail:

  • if processErCode fails, it returns a Left String value containing a description of the error. In this case we produce a response containing a JSON object with the field error, where we insert the error message
  • if processErCode succeeds, it returns a Right ByteString value containing the bytes of the generated image. Ideally we would return those bytes in the response; for the MVP we just dump them to a file which we save locally, and we return the name of the file. Note that we generate a random name for the file. This is not ideal: it requires cleaning up the directory of generated files from time to time, and it is also possible that someone accidentally overrides your file. In theory this solution is very, very poor. In practice it is super simple and it works well enough for my use case 🙂 A sketch of the handler follows this list.
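This is roughly what the handler could look like (a sketch, not the project's actual code; processErCode is the wrapper described above, and dumpToRandomFile is a hypothetical helper that writes the bytes to a randomly named file and returns that name):

```haskell
import           Control.Monad.IO.Class     (liftIO)
import qualified Data.ByteString.Lazy.Char8 as L
import           Snap.Core

generate :: Snap ()
generate = do
    body   <- getRequestBody               -- the ER code sent by the browser
    result <- liftIO (processErCode body)  -- run the Erd pipeline in memory
    case result of
        Left err ->
            writeLBS (L.pack ("{\"error\": \"" ++ err ++ "\"}"))
        Right imageBytes -> do
            name <- liftIO (dumpToRandomFile imageBytes)
            writeLBS (L.pack ("{\"image\": \"generated/" ++ name ++ "\"}"))
```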

Serving assets and generated diagrams

We just serve the contents of two directories:

  • assets: here we store Javascript and CSS files
  • generated: this directory contains the diagrams generated at each POST request on /generate

Javascript code

Finally, we need to do some work on the client side: basically, sending a POST request to the server and processing the result. That means either:

  • showing the error message in the status box
  • loading the image from the given URL

For implementing the call to the server I have used the promise.js library.
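A sketch of that call, assuming the promise.js API and a few hypothetical helpers (editor is the editor instance, setStatus writes to the status box):

```javascript
function generateDiagram() {
    var code = editor.getValue();
    promise.post('/generate', code).then(function (error, text) {
        if (error) {
            setStatus('Request failed');
            return;
        }
        var response = JSON.parse(text);
        if (response.error) {
            // Show the error message in the status box.
            setStatus(response.error);
        } else {
            // Load the image from the given URL.
            setStatus('Success!');
            document.getElementById('diagram').src = response.image;
        }
    });
}
```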

 

Screenshots

And now it is show time! From the following image you can notice two things:

  1. the application has a very simple interface: just the editor, the Generate Diagram button, the generated image and a status label (saying “Success!” in this case)
  2. I am not the fanciest designer out there 🙂

[Image: screenshot of the Erd web server interface, 2015-04-19]

Final thoughts

There are many ways we could improve this application:

  • Update the image automatically when the user does not type for a few seconds

  • Report errors inline

  • Syntax highlighting and auto-completion for the editor

On the other hand, I am really satisfied with the current result: with a few lines of Haskell and Javascript we created an application that helps me when working on those fancy diagrams, giving me more time to think about the concepts I want to represent, and less time spent remembering the options to generate the diagram or installing tools.

An approach to UML diagrams and ER models bearable for a Software Engineer

As part of my current job at Groupon I have to create diagrams, those nice pictures which make project managers happy. I write basic UML diagrams (State diagrams and Activity diagrams) together with Entity-Relationship diagrams (yes, the ones for the DB).

[Images: a sample UML diagram and a sample ER diagram]

Yes, people want these pictures and I have to create them

What is wrong with the previous process

I am a Software Engineer and I understand the importance of communication; therefore I understand how useful diagrams can be. However, I have to confess that I am always a bit suspicious when I interact with people too fond of them: I am always afraid of dealing with people who like to spend endless time discussing things, pretending to be able to build something, and generally just wasting time. I am an Engineer: I like building things, not just talking about building things.

On the other hand, systems evolve and diagrams can easily end up being outdated. One thing that makes this problem harder to fix is that you normally need some specific tool to create a diagram, so when you need to update one you have to install the right tool, start it, generate your image and update the document.

I do not like this particular process, and I would really love to improve it.

My current process

I prefer text formats to nice WYSIWYG editors, because:

  • text is portable, while WYSIWYG editors each tend to have their own format
  • you can easily compare and merge text files
  • you do not waste endless time trying to convince the editor to do what you want or exploring all the menu items

So when I need to write diagrams, I use text formats to describe them and then I generate the actual pictures from those text files. I feel the process is more controllable, repeatable and versionable, and the other people still get their pretty pictures.

Currently I am using PlantUML for UML diagrams and Erd for ER diagrams. Erd wins extra points because it is written in Haskell. There is also a nice website that offers a web editor for PlantUML: it is named PlantText.

Now, this solution has problems:

  • you still need to install software, at least for the ER diagrams (you can generate the UML diagrams using the PlantText website)
  • there are no nice editors supporting the DSLs used to describe these diagrams
  • there is no integration between the web editor and my GitHub repository
  • you need to update the images in the documents after having regenerated them

The ideal process

To fix the current process, I would love to have a web application to edit the diagrams, and to have this web application talk with my GitHub repository, doing the versioning for me. I would also like this web application to generate the images on the fly, and my documents to support links to images exposed on the web. That would be great for two reasons:

  1. I would not have to update all the documents containing a diagram when the diagram changes. One problem is that many documents take a copy of the image, not a reference. The other problem is that the server with the diagrams needs to be always up.
  2. I would know where to find the source of a diagram. I imagine, for example, that we could have an image available at, let's say, http://diagrams.foo.com/diagram1.png and the web application to edit it at http://diagrams.foo.com/diagram1.png/edit.

It would be fantastic to have a process to commit changes and a git hook to generate the images, maybe even updating the existing documents.

What I started doing: syntax highlighting for PlantUML

Now, I am still far away from having the ideal process in place, and probably I will never get there: the effort of implementing it, and the changes required to the current process, would not justify it. However, I am starting to take some steps in that direction. In particular, I am focusing on improving the web editor for UML by implementing syntax highlighting.

I have implemented syntax highlighting for a large part of PlantUML for the CodeMirror web editor. The code is available on GitHub and I have sent a pull request to plantuml-server.

Writing a syntax highlighting mode can resemble writing a grammar; in fact my first thought was to write the grammar for ANTLR and then implement an automatic conversion from EBNF to a CodeMirror mode. However, the goals of a grammar and of a syntax highlighting system are different. The former is intended to parse correct files and stop when it finds errors (only very good grammars have strong error handling and are able to overcome a few errors), while a syntax highlighting system works on a document that is wrong all the time: as you type, the document is incorrect; only when you complete your statement is the document correct, until you start typing the next character and the document is wrong again. A syntax highlighting system needs to be very robust and to tolerate a lot of errors.

This is a random piece of the mode I have defined.
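A sketch in the same spirit (not the actual code from the pull request): a CodeMirror mode driven by a small state machine.

```javascript
CodeMirror.defineMode("plantuml", function () {
    return {
        startState: function () {
            return { current: "start" };
        },
        token: function (stream, state) {
            if (stream.eatSpace()) return null;
            if (state.current === "start") {
                if (stream.match("class")) {
                    state.current = "class def";  // the next word is a class name
                    return "keyword";
                }
                stream.next();
                return null;
            }
            if (state.current === "class def") {
                stream.eatWhile(/\w/);
                state.current = "start";
                return "def";
            }
            stream.next();
            return null;
        }
    };
});
```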

Now, the basic idea is that you have a state machine whose states start from start and go through things like class def or stereotype style. Depending on the state you interpret tokens differently. The point is that you should keep the number of states very limited. Remember, you want your syntax highlighting system to be robust and to provide some reasonable output as the user types. So your parser will not be as refined as the parser you would write for a compiler. You will instead end up with a few states, so few that it can make sense to define them manually (no need for parser generators), and they should have human-comprehensible meanings.

Note that CodeMirror also provides a library to test your mode, and I really appreciate that. These are a few of my tests:
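A sketch of what such tests look like (following the conventions of CodeMirror's mode-testing harness; the actual test file may differ):

```javascript
var mode = CodeMirror.getMode({indentUnit: 2}, "plantuml");
function MT(name) {
    test.mode(name, mode, Array.prototype.slice.call(arguments, 1));
}

// Each bracketed pair is "[expected-style token-text]".
MT("classKeyword", "[keyword class] [def car]");
```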

Consider the first test: it says that the first word (class) should be recognized as a keyword, while the second (car) as a definition (or def).

The only problem with writing this code is that the PlantUML grammar is… suboptimal. It is used for a lot of different types of diagrams and it is not so clear to me. I would definitely not suggest it as an example of a well-designed DSL.

What I want to do in the future

Once I am finished with the syntax highlighting, I want to implement auto-completion. This would make it much easier for me to write UML diagrams: currently I always have to look up examples to figure out how to do things. Some support from the editor would help greatly. It would be fantastic to also have error reporting as you type, but that could be a bit more complicated to build.

The next step is to write a web application around the Erd program. I have started creating the project (erd-web-server); let's see when I can find the time to play a little more with Haskell…

Once I have done that, I will work on the GitHub integration. I would like to access the diagrams in my projects and to generate the images as part of a git web hook.

So there is plenty of room for improvement, and even an engineer can have fun with diagrams, especially building the tool chain around them.

Getting started with Docker from a developer point of view: how to build an environment you can trust

Lately I have spent a lot of thought on building repeatable processes that can be trusted. I think that therein lies the difference between being a happy hacker cranking out code for the fun of it and a happy hacker delivering something you can count on. What makes you a professional is a process that is stable, is safe and permits you to evolve without regressions.

As part of this process I have focused more on Continuous Integration and on techniques for testing. I think a big part of having a good process is having an environment you can control, easily configure and replicate as you want. Have you ever updated something on your development machine and all hell broke loose? Well, I do not like that. Sure, there are a few tools we can use:

  • Virtualenv when working with Python, to isolate the libraries you want to access
  • RVM and Gemfiles to play with different versions of Ruby/JRuby plus libraries for different projects
  • Cabal, which permits specifying project-specific sets of libraries for Haskell projects (and BTW good luck with that…)
  • Maven to specify which version of the Java compiler you want to use and which dependencies

These tools help a lot, but they are not nearly enough. Sometimes you have to access shared libraries, sometimes you need a certain tool (Apache httpd? MySQL? PostgreSQL?) installed and configured in a certain way. For example:

  • you could need Apache httpd configured on a certain port, for a certain domain name
  • you could need a certain set of users for your DB, with specific permissions set
  • you could need a specific compiler, maybe even a specific version (C++11, anyone?)

There are many things that you may need to control to have a fully replicable environment. Sometimes you can just use some scripts to create that environment and distribute those scripts. Sometimes you can give instructions, listing all the steps to replicate that environment. The problem is that other contributors could fail to execute those steps, and your whole environment could get messed up when you update something in your system. When that happens you want a button to click to return to a known working state.

You can easily end up with an environment slightly different from the ones of your other team members or from the production environment, and inconsistencies start to creep in. Moreover, if you have a long setup process, it could take you a long time to recreate the environment on a new machine. When you need to start working on another laptop, for whatever reason, you want to be able to do that easily; when you want someone to start contributing to your open-source projects, you want to lower the barriers.

It is for all these reasons that recently I started playing with Docker.

What is Docker and how to install it

Basically you can imagine Docker as a sort of lightweight alternative to VirtualBox or other similar hypervisors. Running on a Linux box, you can create different sorts of virtual machines, all using the kernel of the “real” machine. However you can fully isolate those virtual machines, installing specific versions of the tools you need, specific libraries, etc.

Docker runs natively only on Linux. To use it under Mac OS X or Windows you need to create a lightweight virtual machine running Linux, and Docker will run on that virtual machine. The whole mess can be partially hidden using boot2docker. It means some additional headaches, but you can survive them if you have to. When I can, I prefer to ssh into a Linux box and run Docker there, but sometimes that is not the best solution.

To install docker on a Debian derivative just run:
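At the time of writing the package for Debian and its derivatives is named docker.io, so it boils down to:

    sudo apt-get update
    sudo apt-get install docker.io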

Our example: creating two interacting docker containers

Let’s start with a simple example: let’s suppose you want to develop a PHP application (I am sorry…) and you want to use MySQL as your database (sorry again…).

We will create two docker containers: on the first one we will install PHP, on the second one MySQL. We will make the two containers communicate and access the application from a browser on our host machine. For simplicity we will run PhpMyAdmin instead of developing any sample application in PHP.

The first Docker container: PHP

Let’s start with something very simple: let’s configure a Docker image to run httpd under CentOS 6. Create a directory named phpmachine and, inside it, a file named Dockerfile.
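A minimal Dockerfile could look like this (a sketch; I am installing PHP as well, since we will need it later for PhpMyAdmin):

    # start from the official CentOS 6 image
    FROM centos:centos6

    # install Apache httpd and PHP, with the MySQL bindings we will need later
    RUN yum install -y httpd php php-mysql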

Note that this is a very simple example: we are not specifying a particular version of httpd to be installed. When installing some other software we may want to do that.

From the directory containing the Dockerfile run:
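Assuming we call the image phpmachine (the name is just my choice):

    docker build -t phpmachine .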

This command will build an image following the instructions in the Dockerfile. As a first step it will download a CentOS 6 image to be used as the base of this machine.

Now, running docker images, you should find a line similar to this one:
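Something like this (the ID, timestamp and size are obviously illustrative):

    REPOSITORY     TAG       IMAGE ID       CREATED          VIRTUAL SIZE
    phpmachine     latest    d0e008c6cf02   2 minutes ago    290.7 MB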

You can now start a container from this image and log into it with this command:
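Something like:

    docker run -t -i phpmachine /bin/bash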

Once you are logged into the container you can start Apache and find out the IP of the container:
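On CentOS 6 that means something like:

    service httpd start
    ifconfig eth0    # the "inet addr" shown here is the IP of the container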

Now, if you type that IP in a browser you should see something like this:

Screenshot from 2015-03-08 17:13:53

Cool, it is up and running!

Let’s improve the process so that 1) we can start the httpd server without having to use the console of the docker container and 2) we do not have to figure out the IP of the container.

To solve the first issue just add this line to the Dockerfile:
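Assuming the standard CentOS 6 layout, something like:

    # start Apache in the foreground when the container starts
    CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]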

Now rebuild the container and start it like this:
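For example:

    docker build -t phpmachine .
    docker run -p 80:80 phpmachine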

In this way port 80 of the docker container is mapped onto port 80 of the host machine. You can now open a browser and use the localhost or 127.0.0.1 address.

Wonderful, now let’s get started with the MySQL server.

The second Docker container: MySQL server

We create a Dockerfile in another directory and add, in the same directory, a script named config_db.sh.
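A sketch of the two files (illustrative: the exact commands depend on the base image; the myuser/myuserpwd credentials are the ones we will use later):

Dockerfile:

    FROM centos:centos6
    RUN yum install -y mysql-server

    ADD config_db.sh /config_db.sh
    RUN chmod +x /config_db.sh

    EXPOSE 3306
    CMD ["/config_db.sh"]

config_db.sh:

    #!/bin/bash
    # initialize MySQL and create the user our application will use
    service mysqld start
    mysql -e "CREATE USER 'myuser'@'%' IDENTIFIED BY 'myuserpwd';"
    mysql -e "GRANT ALL PRIVILEGES ON *.* TO 'myuser'@'%';"
    service mysqld stop
    # run mysqld in the foreground so that the container does not exit
    exec /usr/bin/mysqld_safe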

Note: we are not persisting the data of our MySQL DB in any way, so every time we restart the container we lose everything.

Now we can build the machine:
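Again from the directory containing the Dockerfile:

    docker build -t mysqlmachine .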

Then we can run it:
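Exposing the standard MySQL port:

    docker run -p 3306:3306 mysqlmachine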

And we can connect from our “real box” to the mysql server running in the docker container:
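Using the user created by config_db.sh:

    mysql -h 127.0.0.1 -P 3306 -u myuser -pmyuserpwd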

Does everything work as expected so far? Cool, let’s move on.

Make the two docker containers communicate

Let’s assign a name to the mysql container:
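We can do that with the --name option (mysqlcontainer is just the name I picked):

    docker run --name mysqlcontainer -p 3306:3306 mysqlmachine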

Now let’s start the PHP container telling it about the mysqlcontainer:
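Here dbhost is the alias under which the linked container will be visible:

    docker run --link mysqlcontainer:dbhost -p 80:80 -t -i phpmachine /bin/bash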

From the console of the phpmachine you should be able to ping dbhost (the name under which the phpmachine can reach the mysql container). Good!

In practice a line is added to the /etc/hosts file of the phpmachine, associating dbhost with the IP of our mysqlmachine.

Installing PHPMyAdmin

We are using PhpMyAdmin as a placeholder for some application that you may want to develop. When you develop an application you want to edit it on your development machine and make it available to the docker container. So, download PhpMyAdmin version 4.0.x (later versions require MySQL 5.5, while CentOS 6 uses MySQL 5.1) and unpack it in some directory, say ~/Downloads/phpMyAdmin-4.0.10-all-languages. Now you can run the docker container with PHP like this:
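Something like this, mounting the unpacked sources as a volume:

    docker run --link mysqlcontainer:dbhost -p 80:80 \
        -v ~/Downloads/phpMyAdmin-4.0.10-all-languages:/var/www/html phpmachine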

This will mount the directory with the source code of PhpMyAdmin on /var/www/html in the phpmachine, which is the directory Apache httpd is configured to serve.

At this point you need to rename config.sample.inc.php to config.inc.php and change this line:
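The line to change sets the host of the DB server; point it to the alias of the linked container:

    $cfg['Servers'][$i]['host'] = 'dbhost';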

In this way the phpmachine should use the db on the mysqlmachine.

Now you should be able to visit localhost and see a form. There, insert the credentials for the db (myuser / myuserpwd) and you should be all set!

Screenshot from 2015-03-09 19:59:07

How does Docker relate to Vagrant, Ansible, Chef and Puppet?

There are a few other tools that can help with managing virtual machines and sort-of-virtual machines. If you are a bit confused about the relations between the different tools, this is an over-simplistic summary:

  • Vagrant is a command line utility to manage virtual machines, but we are talking about complete simulations of a machine, while Docker uses the kernel of the Docker host, resulting in much lighter “virtual machines” (our Docker containers)
  • Ansible, Chef and Puppet are ways to manage the configuration of these machines (operationalising processes); they can be used in conjunction with Docker. Ansible seems much lighter than Chef and Puppet (but slightly less powerful). It is gaining momentum among Docker users and I plan to learn more about it.

This post gives some more details about the relations between these tools.

Conclusions

In our small example we could play with a realistic simulation of the final production setup, which we suppose to be composed of two machines running CentOS 6. By doing so we have figured out a few things (e.g., we have packages for MySQL 5.1 only, which forces us not to use the latest version of PhpMyAdmin; we know the complete list of packages we need to install; etc.). In this way we can reasonably expect very few surprises when deploying to the production environment. I strongly believe that having fewer surprises is extremely good.

We could also deploy the docker containers themselves, if we wanted to (I have not tried that yet).

Update: I am happy the guys at Docker cited this article in their weekly newsletter, thanks!

Continuous Integration on Linux and Windows: Travis and AppVeyor

Recently I worked on improving the testing and the Continuous Integration (CI) configuration of a few open-source projects. In particular I have spent some time on WorldEngine, a world generator written in Python which uses a C++ extension named plate-tectonics.

There have been several issues, the main two are:

  • the deployment of the application on Windows is problematic
  • the applications do not behave in exactly the same way on Linux and Mac OS X

To mitigate these issues I invested some time in writing better tests, improving my usage of Travis (CI for Linux) and starting to use AppVeyor (CI for Windows). While my solution is still not perfect, I feel I am far better protected from regressions on the different platforms and I have a more reliable development process.

Travis

Travis is well-known in the open-source community because of three nice qualities:

  1. It is free for open-source projects
  2. It integrates seamlessly with GitHub
  3. It is very easy to use

Getting started with Travis is very easy: you simply register and connect your GitHub profile. You then select the projects on which you want to activate Travis.

At this point you will see a list of your projects. The green or red color used for the project names makes it immediately evident which projects need to be fixed. You can also take a look at a specific build and see what caused it to fail.

Screen Shot 2015-03-02 at 11.09.46

Every time you push to GitHub, whether to master or to another branch, a build is started. If the branch you are building is used in a pull request, a badge indicating whether the build failed or succeeded is shown.

Screen Shot 2015-03-03 at 10.17.11

Travis, setting up different C++ compilers

While having all your tests passing on one platform is good, having them pass on different platforms is great. For example it is a very good thing to verify that your C++ code compiles correctly both with gcc and clang. This is particularly important if support for C++11 is needed, because it can be affected by using the wrong version of the compiler. You can do that by creating a .travis.yml file containing these lines:
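This is the standard Travis syntax for a compiler matrix (one build per compiler):

    language: cpp

    compiler:
      - gcc
      - clang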

Now, what you typically want to do is install specific versions of the compilers, to have a completely controlled environment, instead of just using whatever happens to be installed on the machine Travis is offering you. Doing that is pretty simple: you just use apt-get to install whatever you need.
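For example, a common recipe at the time to get a specific gcc was the ubuntu-toolchain-r PPA (the version number is just an example):

    before_install:
      - sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
      - sudo apt-get update -qq
      - sudo apt-get install -qq g++-4.8
      - if [ "$CXX" = "g++" ]; then export CXX="g++-4.8"; fi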

Given you are a smart guy I am sure you can adapt this example to your specific case.

Travis, setting up different Python versions

Let’s build our application with a few different versions of Python:
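The version list goes straight into .travis.yml:

    language: python

    python:
      - "2.7"
      - "3.3"
      - "3.4"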

In this case we do not manually install Python: we rely on Travis having the correct versions already installed. To find out more about Python on Travis read here.

A useful trick is to use a different requirements file depending on the Python version (damn you, Python 3!):
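For example, assuming two files named requirements.txt and requirements3.txt:

    install:
      - if [[ $TRAVIS_PYTHON_VERSION == 3* ]]; then pip install -r requirements3.txt; else pip install -r requirements.txt; fi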

My Travis files

These are a few complete Travis files I am using:

plate-tectonics (C++)

WorldEngine (Python, using C++ extensions)

Civs (Clojure)

Javaparser (Java)

Ok, you get the picture. Travis is awesome and you can use it with a lot of different languages. It is easy to get started with (see the Civs script) but it is also flexible and powerful, if you need it to be.

AppVeyor

Recently I was very bugged by problems building plate-tectonics and WorldEngine on Windows. Luckily Bret (who maintains WorldEngine with me) pointed me to AppVeyor, which is basically Travis for Windows.

This is how we configured it for our project, so that it can build our library on 6 different versions of Python:

  • Python 2.7, 32 bit
  • Python 2.7, 64 bit
  • Python 3.3, 32 bit
  • Python 3.3, 64 bit
  • Python 3.4, 32 bit
  • Python 3.4, 64 bit
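In appveyor.yml this translates into an environment matrix; the sketch below follows the pattern of the packaging guide linked at the end of this post (the build and test commands are placeholders):

    environment:
      matrix:
        - PYTHON: "C:\\Python27"
        - PYTHON: "C:\\Python27-x64"
        - PYTHON: "C:\\Python33"
        - PYTHON: "C:\\Python33-x64"
        - PYTHON: "C:\\Python34"
        - PYTHON: "C:\\Python34-x64"

    install:
      - "%PYTHON%\\python.exe -m pip install wheel"

    build: off

    test_script:
      - "%PYTHON%\\python.exe setup.py test"

    after_test:
      - "%PYTHON%\\python.exe setup.py bdist_wheel"

    artifacts:
      - path: dist\*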

One feature of AppVeyor is really great: it stores the artifacts generated during the build.

Screen Shot 2015-03-03 at 10.07.42

We use AppVeyor to generate the Windows binary wheels for our library and then we upload them to PyPI. The process of uploading the files to PyPI is manual at the moment, because we want to have some control over it (we do not want to just upload a new version every time we push).

Badges

Badges are nice, and they permit checking the status of your project at a glance.

It could sound childish, but those small virtual stickers can motivate you to fix problems as they arise, because you want to be proud of your green status, on both Travis and AppVeyor.

Screen Shot 2015-03-03 at 10.28.23

Conclusions and thoughts for improvements

I think that CI integration is fundamental to give solid bases to your projects. I sleep better since I started using it.

However there is still a lot of room for improvements and a few ideas:

  • I am still missing a CI solution for Mac OS X

  • I could use Docker under Travis to verify the building process under different distros

  • I sometimes use Travis in a trial-and-error fashion: if I do not have access to a Windows machine I just cut a separate branch and push furiously to it, to trigger builds on AppVeyor and collect feedback on the building process under Windows. This seems silly… but it works for me 🙂

Bonus

If you want to use Travis with Perl you can read this.

Building Binary Wheels for Windows using AppVeyor (https://packaging.python.org/en/latest/appveyor.html) is an interesting read for Python developers targeting Windows users.

Mocking in Java: why mocking, why not mocking, mocking also those awful private static methods

Unit tests: there are people out there surviving without them, but in many cases you want to have this life insurance. Something to protect you from letting an error slip in, something to accompany your software when you have long forgotten it and someone else has to figure out how to maintain that legacy thing.

Why Mocking

If you want (need?) to write unit tests, by definition they have to test the behavior of a small unit (typically a class) in isolation. The behavior of the tested method should not depend on the behavior of some other class because:

  • the other classes will change (impacting our test)
  • the other class may not be predictable (random generator, user input)
  • it may depend on external elements (network, databases, other processes)
  • the other classes could require a complex initialization, and you do not want to deal with that

The unit test should answer the question:

Does my unit work in an ideal world, surrounded by extremely nice and well-behaved neighbours?

Mocks provide that ideal world. You can then try the luck of your system in the real world using other kinds of tests (like integration tests).

The argument “mocking is a bad thing”

Some people argue that you should not use mocks, because you should build your systems in such a way as to make mocking unnecessary.

I think this is partially true: if a system is well designed, with testability in mind from day 0, it will probably require much less mocking than otherwise. However you will still need mocking, for three reasons:

  • you will inherit systems which either have no tests or have a low coverage. If tests are added as an afterthought, the system is not designed to be testable and you will end up using a lot of mocking
  • it is true that in most situations you can avoid mocking if you build your system using a lot of interfaces and dependency injection. However, in most cases that means a fair amount of overengineering. You will end up having interfaces like FooBarable and BarFooator all over the place, and then classes like DummyFooBarable, HttpFooBarable, etc. Good, now you can avoid mocking, but your system has become one of the reasons why other programmers laugh at Java code
  • sometimes your unit is a method, not a class, so you want to mock the rest of the class (partial mocking). Suppose you want to test the method foo of class Foo. This method invokes bar and baz of the same class. If these methods interact with a lot of other classes (Bazzator, Bazzinga, BazLoader) it can be easier to just mock the methods bar and baz instead of mocking these other classes. Another advantage is that it makes your tests more readable: you can write something like when this.bar() returns 3 then this.foo() should return false, instead of building a complex test to create the conditions under which this.bar() returns 3

So, yes, you should not mock all the time, but many times you do not have an alternative, and in some cases the alternative is way worse.

Mocking basics

Ok, let’s start by specifying our dependencies. We will use both Mockito and EasyMock, together with PowerMock for extra power. PowerMock complements both Mockito and EasyMock.
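In Maven terms these are roughly the coordinates involved (test scope; the versions are the ones current at the time, adjust as needed):

    org.mockito:mockito-core:1.10.19
    org.easymock:easymock:3.3
    org.powermock:powermock-module-junit4:1.6.2
    org.powermock:powermock-api-mockito:1.6.2
    org.powermock:powermock-api-easymock:1.6.2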

My scenario

I had to work with a legacy application: you know, the kind of application that no one wants to touch even with a pole, the one with all the original authors gone (deported for their crimes?). You get the picture. Now we had to make a tiny change to this application and then run away like hell. Given we are good professionals, we wanted to write a test: the problem was that our change was inside a very large method which was private and static. The method is named dealsToDisplay. Given it is private and static, we invoke it through reflection (see invokeDealsToDisplay()). The actual tests are in dealsToDisplayNoBlacklistingTest, dealsToDisplaySomeBlacklistingTest and dealsToDisplayCompleteBlacklistingTest. All the rest is mocking and plumbing.

Mocking static methods

In my scenario I had to:

  • invoke a private static method
  • mock several static methods

The former is easy; you just have to use reflection:
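A sketch of invokeDealsToDisplay() (Deal and DealsService are invented names standing in for the real classes):

    import java.lang.reflect.Method;
    import java.util.List;

    @SuppressWarnings("unchecked")
    private static List<Deal> invokeDealsToDisplay(List<Deal> deals) throws Exception {
        Method method = DealsService.class.getDeclaredMethod("dealsToDisplay", List.class);
        boolean wasAccessible = method.isAccessible();
        method.setAccessible(true);
        try {
            // the target instance is null because the method is static
            return (List<Deal>) method.invoke(null, deals);
        } finally {
            method.setAccessible(wasAccessible);
        }
    }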

We start by finding the method and making it accessible (restoring the previous value when we are done). At this point we can invoke it. Nice. Sort of.

To mock static methods we instead have to use PowerMock, which does the trick by using a custom classloader and rewriting bytecode on the fly. Yes, it does not sound safe. No, there are no alternatives that I am aware of.

So we need to do a few things. First of all we have to instruct PowerMock to take care of loading the class through its classloader:
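That is done with a couple of annotations on the test class (BlacklistService is an invented name for the class whose static methods we want to mock):

    import org.junit.runner.RunWith;
    import org.powermock.core.classloader.annotations.PrepareForTest;
    import org.powermock.modules.junit4.PowerMockRunner;

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(BlacklistService.class) // loaded through PowerMock's classloader
    public class DealsToDisplayTest {
        // tests and plumbing go here
    }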

Then we have to declare which methods we intend to mock:
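With the EasyMock flavour of PowerMock you can list just the static methods to mock, leaving the others untouched (fetchBlacklist is again an invented name):

    // mock only fetchBlacklist(); the other static methods keep their real behavior
    PowerMock.mockStaticPartial(BlacklistService.class, "fetchBlacklist");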

Finally we mock them, specifying what result we want when they are invoked:
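EasyMock-style: record the expectations, switch to replay mode, exercise the code and verify (expect comes from org.easymock.EasyMock):

    expect(BlacklistService.fetchBlacklist()).andReturn(Arrays.asList("ACME Corp"));
    PowerMock.replay(BlacklistService.class);

    // here we invoke dealsToDisplay() through reflection and assert on the result
    PowerMock.verify(BlacklistService.class);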

Conclusions

Our solution required a considerable amount of plumbing and mocking, mocking and plumbing, to test a very limited functionality. While we are happy with the result and reasonably confident that it will not destroy that old piece of code, it is clear that this is not an ideal scenario. But sometimes you gotta do what you gotta do.

Bonus: mocking singletons

A common issue is the necessity of mocking singletons. While you can write your own recipe reusing the code presented in this post, you can also take a look at this post:

Mocking a singleton with EasyMock and PowerMock

Happy mocking!

P.S. If you have suggestions or corrections, please let me know!

Update

Steve Bennett wrote some interesting comments about this post; it is worth taking a look at Steve Bennett’s blog.

Portability: stories of what can go wrong when you run your code on another machine

In the last year I have faced many surprises when running some well-tested code on my dev servers or my laptops. It is curious (and scary) how code that has been widely used in production (sometimes for years) can still hide portability issues, so that the first time you try that piece of software in slightly different conditions the unexpected happens.

I have experienced that both when working on some open-source projects and in some very big companies. The difference probably is that such problems tend to emerge sooner in open-source projects, if there is an active userbase, while in companies that control their development environment these little time bombs can remain silent and strike a long time after being put in place. In the following I list a few categories of portability issues that caused problems.

Locale configuration

This is something we constantly overlook, but a lot of libraries make assumptions based on the locale configured on the current machine. If you are on a unix-ish box (Linux, BSD, Mac, etc.) open a console and run locale. You will get something similar to this:

Screen Shot 2015-02-09 at 10.08.54
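On a Mac it typically looks like this (illustrative values):

    LANG="en_US.UTF-8"
    LC_COLLATE="en_US.UTF-8"
    LC_CTYPE="en_US.UTF-8"
    LC_MESSAGES="en_US.UTF-8"
    LC_MONETARY="en_US.UTF-8"
    LC_NUMERIC="en_US.UTF-8"
    LC_TIME="en_US.UTF-8"
    LC_ALL=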

These environment variables can affect the way dates are parsed, or even the way numbers are parsed. For example in Italian we use the comma instead of the dot to separate the integer part from the fractional part of a number, so that “12.14” may fail to parse if your locale is set to Italian and parse fine if it is set to UK English. Or: Americans expect the month to precede the day in dates. So:

02/01/2015

Could be the 1st of February for an American, or the 2nd of January in most European countries. The way it is parsed can depend on the locale configuration.
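A quick Java illustration of both traps, using the locale-sensitive NumberFormat and DateFormat classes:

    import java.text.DateFormat;
    import java.text.NumberFormat;
    import java.util.Locale;

    public class LocaleSurprises {
        public static void main(String[] args) throws Exception {
            // in Italian the dot is a grouping separator, not a decimal point
            System.out.println(NumberFormat.getInstance(Locale.US).parse("12.14"));      // 12.14
            System.out.println(NumberFormat.getInstance(Locale.ITALIAN).parse("12.14")); // 1214

            DateFormat us = DateFormat.getDateInstance(DateFormat.SHORT, Locale.US);
            DateFormat it = DateFormat.getDateInstance(DateFormat.SHORT, Locale.ITALY);
            System.out.println(us.parse("02/01/2015")); // February 1st, 2015
            System.out.println(it.parse("02/01/2015")); // January 2nd, 2015
        }
    }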

You will notice that the locale configuration also contains a default encoding (UTF-8 in my case), so I would imagine that encoding problems with text are possible as well. I have not faced them yet this year, but I will keep an eye open for that.

Locale configuration… over SSH

A variant of the previous problem (or a multiplier of it) is that the locale configuration can be transferred when ssh-ing into a machine. By default, if you connect, let’s say, from a machine with an Irish locale to a machine with a US locale, the console opened will be configured with the Irish locale. Imagine how fun it is to debug this problem: a colleague of yours (with the American locale) sshes into that machine and does not see any problem; then you ssh into it and run into the problem, magically appearing just for Irish folks (should we suspect Leprechauns?).

How can you avoid that? Simple: you can solve it either by preventing the client from sending the environment configuration or by preventing the server from accepting it. To prevent the client from sending it, open your /etc/ssh_config and look for these lines:
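On most default configurations it is a single line:

    SendEnv LANG LC_*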

Now, remove these bad boys and save yourself some headaches. To prevent the server from accepting them, you have to look instead at the configuration of the ssh daemon (sshd).
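The corresponding server-side line, usually found in /etc/ssh/sshd_config, is:

    AcceptEnv LANG LC_*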

Bonus solution: fix your software to not depend on the locale configuration

Poor man’s solution: force the locale to the holy working value (typically en_US.UTF-8) before compiling or running the locale-dependent/buggy application:
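For example:

    export LANG=en_US.UTF-8
    export LC_ALL=en_US.UTF-8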

Timezone

I found out that some tests were passing only if they were run in a certain timezone… hint: it was not the timezone I was in.

Why was it happening? Because some functions had a hard-coded timezone, while others had not. It was very confusing to solve this issue, because a value obtained from parsing a date like 1/1/2015 ended up being transformed into 2/1/2015 (2nd of January) after a few passages. So, be sure you are not silently using the current timezone in some places and a hard-coded one (say, UTC) in others. Or be ready to deal with weird bugs. I wonder what happens when daylight saving time starts or ends… fun times.

Version-dependent implementations

Sometimes the problem is that you are doing something really stupid and do not realize it, because it happens to work on a very specific configuration. Those are among my favourite bugs. Suppose for example that you write a test checking whether a certain value is present as the first element of an array. So far so good. The problem is that this array is obtained by iterating over a Set, which does not give any guarantee about the order of the iterated elements (they are not sorted by any known and sensible function, and they are not necessarily in the order they were inserted in).

As long as you run your tests on a machine with the same architecture and the same version of the standard libraries (the same JDK in this case) you do not notice any issue, and you will not notice it until a new version of the JDK is released which returns the values of that implementation of Set in a different order (absolutely legit). And now your tests do not pass. Have fun finding the root cause.
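A minimal Java example of the trap (the names are invented):

    import static org.junit.Assert.assertEquals;

    import java.util.HashSet;
    import java.util.Set;

    import org.junit.Test;

    public class FragileOrderTest {
        @Test
        public void firstPlayerTest() {
            Set<String> players = new HashSet<>();
            players.add("Baggio");
            players.add("Maldini");
            String[] asArray = players.toArray(new String[0]);
            // fragile: HashSet iteration order is unspecified and has changed
            // between JDK releases, so this may pass on one JDK and fail on another
            assertEquals("Baggio", asArray[0]);
        }
    }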

Compilers

This would deserve a series of posts of its own. I experienced it while working on C++ code using some features from C++11. In particular I was trying to make the same codebase work on:

  • gcc
  • clang
  • MinGW
  • Visual C++

I was very surprised by the warnings (and even errors) that some compilers report on code that other compilers are perfectly fine with. The worst thing was that one function (a pretty important one) of the standard library was not available on one particular platform. I figured that out only after I had started using that function all over the place, when I tried to port my application to a new compiler; I ended up making the feature using that function unavailable/crappy on that platform. Definitely not satisfying, but at least I remembered why I stopped programming in C++. The advantages of the JVM are easily overlooked. And everything, in the end, is easier to port than C++ code.

Conclusions

This sort of issue makes me wonder how software can work at all: the number of possible errors that can go unnoticed is simply mesmerising. I think the only answer is: release, test, stress your code in any way possible, and be ready anyway to face all sorts of problems leading to interesting debugging sessions. If you have talented, well-educated and patient developers, maybe your code will work as desired a reasonable portion of the time. Maybe.