Lessons Learned about Developer Experience from the Platform Adoption Metrics @ Naranja


Key Takeaways

  • In order to properly treat your internal platform as a product, having accurate metrics of its adoption is key to success.
  • Naranja defined three main concepts to understand platform adoption: applicability, adoption and up-to-dateness.
  • Once your organization is aligned around these metrics and the actual data is available, conversations around the platform’s actual and potential usage, as well as roadmap planning sessions, tend to improve.
  • An automated solution to collect all this information is viable and relatively easy to implement, with resulting dashboards tailored to specific audiences.
  • We see room to continue improving on this with more refined models that we look forward to developing on top of this initial MVP.

Introduction

As software engineers, we have a bias towards laziness that usually makes us better, more productive engineers, because it keeps us trying to automate things and create abstractions to eliminate duplicate code. But this impulse – which is certainly noble – sometimes takes us very far without a real understanding of the true economic impact of our initiatives, which is ultimately what stakeholders will look at in order to determine their viability.

On the Developer Experience team at Naranja, we are devoted to offering solutions to our internal customers that allow them to focus on business-driven development by removing the need to resolve platform issues, which would otherwise have to be done over and over again.

We decided to achieve this goal by applying product management practices to our internal platforms. After gathering some experience, we understood the importance of having metrics that reflect the impact of our work and enable data-driven decisions, and we implemented a solution to collect them and provide visibility throughout the organization.

In this article we will cover the definitions we arrived at in order to align our organization around these concerns, and the outcomes that resulted both from implementing a solution that measures our work based on them and from having that data available.

Context

Naranja is the main credit card issuer in Argentina, with more than 5 million clients, 9 million credit cards, 200 branches and agreements with more than 260,000 commercial partners. In 2017, it started a digital transformation process with the mission of becoming the most loved fintech in Argentina. This implied a total renewal of its existing systems, as well as of its culture and development processes.

Tribes, Squads and Projects

Within this context, Naranja shaped its organization with a structure inspired by the Spotify Model, with autonomous squads responsible for product delivery, grouped into tribes. Currently the IT area has more than 300 professionals, organized in 20+ squads, grouped in 5 tribes and two development centers (Buenos Aires and Córdoba). Their scope includes a mobile app, a website with self-management tools, a portal for commercial partners, and several other customer-facing applications and the services that power them. Each squad is responsible for the execution of one or more projects, which for the purposes of this initiative are equivalent to repositories in our SCM – GitLab.

Developer Experience

To support these teams’ efficiency, the Developer Experience area organized itself into three practices (Delivery, CloudOps and Development), which are responsible for agreeing on a solution roadmap with tribe leaders, based on their prioritized development needs, and then for implementing those solutions, facilitating their adoption and providing support.

Products & Assets

For these solutions, the area adopted a product-based approach, treating each solution as a package that our clients can use with as much autonomy as possible, evolving it in response to our clients’ needs and even making clients part of the ideation process, with constant feedback. Products, conceptually, are a very important part of our metrics; later on we’ll see how we use this concept as an aggregator of our indicators. Examples of our products are Wiru (CI/CD solution on top of GitLab), Golden API (node.js REST API reference architecture) and Zumo (UI components).

Furthermore, our products are composed of smaller units that we call assets. For example, Golden API is composed of many libraries and templates; each one of them is considered an asset. Wiru (CI/CD) is composed of jobs and templates; again, each one of those is considered an asset.
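To make the product/asset relationship concrete, here is a minimal sketch of how these concepts could be modeled; the interfaces, field names and example versions are illustrative assumptions, not Naranja’s actual schema.

```typescript
// Illustrative model only: names, fields and versions are assumptions,
// not Naranja's actual schema.

// An asset is the smallest unit we track: a library, a template, a CI job.
interface Asset {
  name: string;          // e.g. a Golden API library or a Wiru job
  latestVersion: string; // latest published version, used later for up-to-dateness
}

// A product groups related assets and acts as the aggregator for our indicators.
interface Product {
  name: string; // e.g. "Wiru", "Golden API", "Zumo"
  assets: Asset[];
}

const goldenApi: Product = {
  name: "Golden API",
  assets: [
    { name: "golden-api-template", latestVersion: "2.3.0" },        // hypothetical asset
    { name: "golden-api-logging-library", latestVersion: "1.4.1" }, // hypothetical asset
  ],
};
```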

Metrics

With these definitions, we aimed to understand how our different squads and tribes were using our assets in their projects. After some analysis and after trying different approaches, we arrived at three key concepts to measure: applicability, adoption and up-to-dateness. By applicability we mean the degree to which an asset or product applies to different projects, while adoption measures – within the frame of applicability – which projects effectively adopted our solutions. Finally, up-to-dateness is an index that allows us to see whether projects that actually adopted an asset are using the latest available version. Each one of these concepts deserves a more detailed explanation.
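As a rough sketch of how these three concepts relate to each other, the following illustration computes them as simple ratios, with adoption measured over the applicable projects and up-to-dateness over the adopting ones; the type and function names are ours, not part of Naranja’s implementation.

```typescript
// Illustrative only: a simplified view of the three metrics as ratios.
// Field and function names are assumptions for the sake of the example.
interface AssetUsageCounts {
  totalProjects: number;      // all projects in the organization
  applicableProjects: number; // projects where the asset could be used
  adoptingProjects: number;   // applicable projects that actually use the asset
  onLatestVersion: number;    // adopting projects that use the latest version
}

const ratio = (part: number, whole: number): number =>
  whole === 0 ? 0 : part / whole;

// Degree to which the asset applies across all projects.
const applicability = (c: AssetUsageCounts) =>
  ratio(c.applicableProjects, c.totalProjects);

// Measured within the frame of applicability: of the projects where the
// asset applies, how many effectively adopted it.
const adoption = (c: AssetUsageCounts) =>
  ratio(c.adoptingProjects, c.applicableProjects);

// Of the projects that adopted the asset, how many are on the latest version.
const upToDateness = (c: AssetUsageCounts) =>
  ratio(c.onLatestVersion, c.adoptingProjects);
```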

Applicability and Project Types

To properly understand applicability we first need to introduce another concept, namely “project types”: these are abstract architecture patterns that we defined for Naranja at the software component level. Examples of project types include: “microservice”, “frontend”, “bff”, “qa-automation”, etc. Also, each project type may have variants, like the platform used (serverless or container-based). But it’s important to be clear that these types are agnostic to the actual implementation.

That said, our definition of applicability is associated with project types: it is resolved by a matrix that relates assets with project types, from which we obtain each asset’s applicability. For example, let’s suppose we have ten projects in our organization. Three of them are of the type “frontend”, four “microservice” and three “bff”. On the other hand, we have an asset, “naranja-angular-authentication-module”, which is responsible for managing front-end authentication. We would then say this asset has a 30% applicability (3 applicable projects / 10 total projects = 0.3), since it can potentially be used in 3 projects (regardless of whether it is actually used, which we’ll cover later on).
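A minimal sketch of that matrix-based calculation, reusing the example above; the data structures and helper names are illustrative assumptions, not Naranja’s actual implementation.

```typescript
// Illustrative sketch of resolving applicability from an asset / project-type matrix.
type ProjectType = "frontend" | "microservice" | "bff";

interface Project {
  name: string;
  types: ProjectType[]; // how a project declares its types is covered further on
}

// The matrix: which project types each asset applies to.
const assetApplicability: Record<string, ProjectType[]> = {
  "naranja-angular-authentication-module": ["frontend"],
};

function applicabilityOf(asset: string, projects: Project[]): number {
  const applicableTypes = assetApplicability[asset] ?? [];
  const applicable = projects.filter((p) =>
    p.types.some((t) => applicableTypes.includes(t))
  );
  return applicable.length / projects.length;
}

// Ten projects: three frontends, four microservices, three BFFs.
const projects: Project[] = [
  ...["front-a", "front-b", "front-c"].map((name) => ({ name, types: ["frontend" as ProjectType] })),
  ...["ms-a", "ms-b", "ms-c", "ms-d"].map((name) => ({ name, types: ["microservice" as ProjectType] })),
  ...["bff-a", "bff-b", "bff-c"].map((name) => ({ name, types: ["bff" as ProjectType] })),
];

console.log(applicabilityOf("naranja-angular-authentication-module", projects)); // 0.3
```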

There are also some details around how a project declares its types, which we’ll review along with some implementation details.

To complete the idea of applicability, another important definition that we found along the way is that of aggregated applicability. It took us some revisions to conclude that, when trying to understand applicability at a more global level – for example, the applicability of a product as a whole – using averages did not properly reflect overall usefulness.

Wiru (our CI/CD product) is a good example to explain this: as a product that takes care of a cross-cutting concern, it has some assets that apply to certain project types and others that apply to different ones.

Project   Type          Front Pipeline   MS Pipeline   BFF Pipeline
front-a   Front         APPLIES          NOT APPLIES   NOT APPLIES
front-b   Front         APPLIES          NOT APPLIES   NOT APPLIES
front-c   Front         APPLIES          NOT APPLIES   NOT APPLIES
ms-a      Microservice  NOT APPLIES      APPLIES       NOT APPLIES
ms-b      Microservice  NOT APPLIES      APPLIES       NOT APPLIES
ms-c      Microservice  NOT APPLIES      APPLIES       NOT APPLIES
ms-d      Microservice  NOT APPLIES      APPLIES       NOT APPLIES
bff-a     BFF           NOT APPLIES      NOT APPLIES   APPLIES
bff-b     BFF           NOT APPLIES      NOT APPLIES   APPLIES
bff-c     BFF           NOT APPLIES      NOT APPLIES   APPLIES
Applicability           30%              40%           30%

Using this example, assuming Wiru has three assets (one for microservices, another one for front-end and another one for BFFs), if we…
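As a hedged sketch of how this example can play out – assuming aggregated applicability is read as the share of projects covered by at least one of the product’s assets, which is an interpretation and not a definition taken from the article – averaging the three pipelines’ percentages understates how useful Wiru is as a whole.

```typescript
// Illustrative only: contrasting averaged vs. aggregated applicability for Wiru,
// under the assumption that "aggregated" means the share of projects covered
// by at least one of the product's assets.
const totalProjects = 10;

// Per-asset applicability taken from the table above.
const perAsset = {
  "front-pipeline": 3 / 10, // 30%
  "ms-pipeline": 4 / 10,    // 40%
  "bff-pipeline": 3 / 10,   // 30%
};

// Averaging the assets suggests Wiru applies to only about a third of projects.
const averaged =
  Object.values(perAsset).reduce((sum, a) => sum + a, 0) /
  Object.values(perAsset).length; // ≈ 0.33

// But every one of the ten projects is covered by exactly one Wiru asset,
// so the product as a whole applies to all of them.
const coveredProjects = 3 + 4 + 3; // no overlap in this example
const aggregated = coveredProjects / totalProjects; // 1.0

console.log({ averaged, aggregated });
```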

