I really don’t know when it happened, but it’s been a year since I joined #GetResponse as Software Architect 😵 It was an intense and fruitful time worth summing up!
Working with GetResponse
When I was looking for a new job a year ago, I went through a lot of recruitment processes, but in the end I seriously considered only two offers, and I chose GetResponse. In retrospect, I do not regret this decision, and I am very happy that I joined this team 🙂. The company offers 100% remote work, which was especially important to me after years spent on the train between home and the office. It also matters that the product itself is large, complex and demanding - a real challenge in many respects. The IT department is big and full of specialists you can learn a lot from. The technology stack is substantial too, although - no point hiding it - there is also a long tail of technical debt…
My role in GR
I joined GetResponse as a Software Architect and became a member of a cross-area team, so for a year I was a kind of free electron - sure, I had projects to deliver, but beyond that I could look for potential optimisation spots, point out problems and implement things that improve the software development process. I will not hide that this role suits me very well, because it gives me a chance to prove myself. It requires a certain self-discipline and organisation, but in return it makes it possible to take on work that would probably never qualify for sprints focused on business goals…
Completed initiatives
During this year, I managed to do many things, small and big, a large part of which started as my own initiative. While simply carrying out tasks, I encountered various problems, nuisances and complications, which I flagged and consulted on, and which often turned into new tasks or projects. So let’s see what we managed to ship, more or less chronologically 😉
Implementation of Rector in the development process
#Rector can help with upgrading outdated dependencies, as it includes a series of rules that eliminate discontinued usages. That is not all it can do, of course: it also contains a lot of other rules that improve existing code (e.g. by using newer language syntax features) without changing its logic (this makes it safe, but remember: never trust any code modification tool 100%).
I was able to implement this tool so that it refactors the application non-invasively, step by step - the CI process contains a job that is allowed to fail, so it does not block the pipeline (it analyses the files changed within the Merge Request and suggests potential improvements). Locally, any developer can use handy Composer scripts to apply Rector’s suggestions automatically.
We also managed to write custom rules, thanks to which we can systematically improve the code in the legacy area (eliminating Hungarian notation, using constants directly).
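I won’t reproduce our actual configuration here, but a minimal `rector.php` for this kind of setup could look roughly like this (the paths, sets and the custom rule class name are purely illustrative):

```php
<?php

declare(strict_types=1);

use Rector\Config\RectorConfig;
use Rector\Set\ValueObject\SetList;

return static function (RectorConfig $rectorConfig): void {
    // analyse only the application sources
    $rectorConfig->paths([__DIR__ . '/src']);

    // ready-made sets with safe, logic-preserving improvements
    $rectorConfig->sets([
        SetList::CODE_QUALITY,
        SetList::PHP_74,
    ]);

    // a project-specific rule, e.g. for the Hungarian notation cleanup
    // (this class name is hypothetical)
    $rectorConfig->rule(\App\Rector\RemoveHungarianNotationRector::class);
};
```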
Improving PHPStan implementation
#PHPStan was already part of the process when I joined GR, but unfortunately some things were configured suboptimally, so it didn’t deliver as much value as it could. On my initiative, we introduced support for baseline files, raised the analysis level, and added extensions (Symfony, Prophecy, MyCLabs enum) that improved the quality of the analysis.
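For illustration, the baseline mechanism boils down to a few lines in `phpstan.neon` - the baseline file itself can be generated with `vendor/bin/phpstan analyse --generate-baseline` (the level and paths below are hypothetical):

```neon
includes:
    # known, accepted errors are recorded here,
    # so only newly introduced problems fail the analysis
    - phpstan-baseline.neon

parameters:
    level: 8
    paths:
        - src
        - tests
```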
In the CI process, it was possible to parallelise the analysis by running nested pipelines. For various reasons, we had to depart from the full analysis suggested by the author, and we separated four areas that are analysed independently. Previously, these four areas were analysed one after another within a single job, which took very long; worse, a failure in any area (except the last one) meant the subsequent areas were not analysed at all, which made it difficult to fix all errors in one go. I was able to modify this process so that the main pipeline triggers a child pipeline with four separate jobs that run in parallel, which shortens the total analysis time.
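In Gitlab CI terms this is the parent-child pipeline mechanism; a sketch of how the parent job can trigger the child pipeline and mirror its status (the file name is illustrative):

```yaml
# .gitlab-ci.yml (parent pipeline)
phpstan:
  stage: test
  trigger:
    # child pipeline defining the 4 area jobs that run in parallel
    include: ci/phpstan-areas.yml
    # the parent job reflects the status of the whole child pipeline
    strategy: depend
```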
In the meantime, I also initiated and performed a PHPStan upgrade from version 0.* to version 1.*, which was necessary to keep receiving updates and to be able to use the new features of the tool.
There is still a lot to improve in this area, but a lot has already been achieved over these 12 months.
Easy Coding Standard implementation
When I joined GR, there was no automation of the coding standards (other than the guidelines on Confluence 😉) - not a problem per se, but it increases the risk of inconsistencies. So I took the initiative to introduce #ECS into the project and then, together with one of my colleagues, implemented it. In the initial phase, the CI job was allowed to fail, so developers had time to apply fixes in their areas. At some point, however, the job became required, and now it blocks the merge when violations of the standard are found in the code. Developers can of course apply the fixes locally, using the prepared Composer scripts.
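To give an idea of what such automation looks like, here is a minimal `ecs.php` sketch (the rule choices are illustrative, not our actual standard):

```php
<?php

declare(strict_types=1);

use PhpCsFixer\Fixer\ArrayNotation\ArraySyntaxFixer;
use Symplify\EasyCodingStandard\Config\ECSConfig;
use Symplify\EasyCodingStandard\ValueObject\Set\SetList;

return static function (ECSConfig $ecsConfig): void {
    $ecsConfig->paths([__DIR__ . '/src', __DIR__ . '/tests']);

    // a ready-made baseline of rules
    $ecsConfig->sets([SetList::PSR_12]);

    // individual fixers can be layered on top of the sets
    $ecsConfig->ruleWithConfiguration(ArraySyntaxFixer::class, ['syntax' => 'short']);
};
```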
Central Traefik
In GetResponse, apart from the main application, there are plenty of other projects, and each of them (or at least the vast majority) offers a Docker stack for local development. Applications within these stacks expose domains, and they do it using Traefik. Unfortunately, the implementation was not optimal: each stack had its own Traefik service, which made it impossible to run multiple stacks at the same time (only one Traefik could bind to ports 80/443). Of course, it was still entirely possible to work with these applications, but it was harder to integrate them with each other, and switching context was not developer-friendly (you had to shut down one stack before starting another).
On my initiative, we introduced a central Traefik, which runs in the background as an independent stack, while the remaining stacks expose their domains using labels on the services that need to be available via HTTPS. Thanks to this, you can run any number of environments, and the stacks can communicate with each other (via a shared Docker network).
An additional advantage of this solution is that SSL certificates only need to be updated in one place 🙂
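A sketch of how a single project stack can plug into such a central Traefik (the domain, router and network names are illustrative):

```yaml
# docker-compose.yml of an individual project stack
services:
  app:
    image: my-app:dev
    labels:
      - traefik.enable=true
      - traefik.http.routers.my-app.rule=Host(`my-app.localhost`)
      - traefik.http.routers.my-app.tls=true
    networks:
      - traefik

networks:
  # the shared network created by the central Traefik stack
  traefik:
    external: true
```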
Captain Hook implementation
All the QA tools implemented or improved so far had one major drawback - their usage was enforced mainly in the CI process. We wanted to lighten the runners’ load a bit and introduce a fail-fast approach, so that more problems would be discovered on the developers’ side, on their machines. To achieve this, we implemented CaptainHook 🪝
Thanks to this, it is possible to:
- validate PHP files (linter)
- check compliance with coding standards
- run static analysis with PHPStan
- validate the commit title (Jira issue reference required 😉)
Some of these tasks run before commit, others before push, and some at both stages. Overall, the goal is for the developer to get instant feedback before sending the code to the central repository. It all happens automatically, because the hooks are installed when the project is set up (specifically, after installing the Composer packages).
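A `captainhook.json` wiring such tasks to hooks might look roughly like this (the Composer script names and the Jira regex are illustrative):

```json
{
    "commit-msg": {
        "enabled": true,
        "actions": [
            {
                "action": "\\CaptainHook\\App\\Hook\\Message\\Action\\Regex",
                "options": {
                    "regex": "#^[A-Z]+-\\d+#"
                }
            }
        ]
    },
    "pre-commit": {
        "enabled": true,
        "actions": [
            { "action": "composer lint" },
            { "action": "composer check-style" }
        ]
    },
    "pre-push": {
        "enabled": true,
        "actions": [
            { "action": "composer phpstan" }
        ]
    }
}
```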
Direction: PHP8!
Coming to GR, I had a few months of coding in PHP8 behind me, and here I had to take a step backwards, as the main system is still based on PHP 7.4. Again, not a problem per se, because the system runs stably - it’s all about the unavailability of some language features introduced in newer PHP versions, which simplify the code and make working with it more comfortable.
So I dug into the topic and, by myself or with the help of others, managed to make a number of changes bringing us closer to PHP8 compatibility: updating dependencies to versions supporting PHP8 (or getting rid of them, if they turned out to be unnecessary), and improving the existing application code and internal dependencies (e.g. fixing signatures of methods implemented from interfaces, removing the word `resource` from namespaces, etc.). There is still a long way to go, because we are blocked by other factors, but when it comes to the code itself, we are much closer to PHP8 than when I started working with the project 🙂
Gitlab as a Code Review Tool
I see this as one of my biggest successes - I managed to convince the decision makers in the IT department to change the code review tool: we switched from Crucible to Gitlab Premium! I was the initiator of this project, but also one of the coordinators of the entire process - from gathering information about the requirements of the development teams and the tool’s capabilities, through supporting the migration itself, to preparing a document describing the code review process in Gitlab.
Previously, when code review was performed in Crucible, Gitlab was also part of the toolset, but not in the Premium version, and it was only used to store the code and run the CI/CD process. This had a major disadvantage: code review happened in isolation from the QA process results, so reviewers either verified the pipeline result manually or left that on the shoulders of the author of the changes, focusing only on what was changed, not on whether it worked.
More subjectively: Crucible’s UI is peculiar, and people used to Gitlab / Github do not feel comfortable there 😅. From my perspective, this change drastically improved the way we work with the code review of delivered changes.
A separate topic is the transition to the Premium version, which offers many features to improve processes:
- code owners
- merged results pipelines
- merge trains (we do not use these yet)
And for people strongly attached to the previous process, just today we completed the Gitlab-Jira integration through the development panel - and yes, I initiated and piloted this one too 😉
Organisational structure in Gitlab
I proposed and implemented a (so far manually managed) organisational tree structure that defines development teams and area meta-teams. Thanks to this, it is possible to:
- assign specific groups of people as code owners
- mention groups (teams) in Merge Requests
- grant permissions to projects or entire project groups by sharing them with a group (which significantly simplifies access management, although it has its drawbacks)
Operating on groups greatly simplifies onboarding, offboarding and inter-team reshuffles, because users only need to be added to or removed from groups, instead of tediously going through countless places to grant or revoke individual permissions.
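For example, with the group tree in place, a `CODEOWNERS` entry can point at a whole team instead of individuals (the paths and group names below are made up):

```
# .gitlab/CODEOWNERS
/src/Billing/   @getresponse/team-billing
/src/Messaging/ @getresponse/team-messaging
```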
Changing the deployment process
The main GetResponse application’s deployment is quite complicated, and I won’t describe it here. Suffice it to say that it takes a long time, because it is multi-stage and requires various kinds of verification. When I joined GR, such a deployment basically blocked the main development branch into which developers were integrating their changes. Therefore, it could not be considered #Continuous Integration, which had obvious drawbacks (e.g. finding out about conflicts between changes introduced by different people too late).
So I proposed some minor changes that unlocked the main development branch, so that developers can now integrate into it at any time (once their changes pass the code review and QA process and are approved by reviewers). In addition to outlining the concept, I also took care of adapting the documentation on Confluence and modernising the Gitlab CI definition.
We’re still looking at this process and thinking about how it could evolve. I’m sure there’s much more to improve to make delivering value to GetResponse’s clients more comfortable, enjoyable and efficient from the developers’ perspective.
Shared `.idea` directory
I don’t know the exact numbers, but it seems the vast majority of GR developers use PHPStorm. It’s a great development environment that helps a lot in working with projects efficiently. We have a local runtime environment based on Docker, and all kinds of scripts and tools that unify the development process. But how do you automate all of that within the IDE?
The `.idea` folder comes in handy - it’s a metadata container in which PHPStorm stores a lot of information about how the project should work. It’s common to see this directory added to `.gitignore`, but is that right? I used to think so, but when I started working with GetResponse, I kept having to configure things by hand and thought to myself: “Why does every new employee have to waste time setting up something that should be available immediately after cloning the project?” 😉
So I took the initiative to follow the JetBrains recommendation of sharing some settings by adding specific files from the `.idea` directory to the #Git repository, which I then implemented. Thanks to this, all developers share the following configurations:
- PHP version required to run the application
- remote PHP interpreter based on Docker container in local stack
- XDebug using a remote interpreter
- connections to databases running in the Docker stack
- Symfony plugin
- #PHPStan and PHPCodeSniffer inspections based on a remote interpreter
- Run/Debug configurations (running various `make` or Composer scripts from the IDE menu)
The main advantage of this approach is minimising the work needed to start with the project. Everything you need just works right after setting the project up. And when new configurations are added as the project develops, they appear automatically for every developer.
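In practice, following the JetBrains guidance means committing most of `.idea` while ignoring the user-specific parts, roughly like this:

```gitignore
# keep .idea in the repository, but ignore per-user files
.idea/workspace.xml
.idea/usage.statistics.xml
.idea/shelf/
# share data source definitions, but never local credentials
.idea/dataSources.local.xml
```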
Modernisation of Renovate Bot integration
Renovate is a tool that automates the tedious process of keeping dependencies up to date. It was already implemented when I joined GR, but I wouldn’t be myself if I hadn’t found something to nitpick 😉
There were several details affecting the package upgrade process:
- Renovate Bot was running from a non-standard image, which generated additional maintenance overhead and caused technical debt
- `rangeStrategy` was set so that the bot modified `composer.json` by setting the latest versions of packages as constraints, which could lead to problems with dependency resolution (in fact, the lowest version that actually contains what the application needs should be required - all newer versions are only optional from the perspective of our project, and a wider range of supported versions means a higher probability that the dependencies of other packages will not conflict with our requirements). At the same time, modifying `composer.json` changed the `content-hash` in `composer.lock`, which, given the distributed work of many people, often led to conflicts in Git…
- Merge Requests issued by the bot contained mass package updates, which made their verification difficult, and when problems were encountered it wasn’t clear which package caused them
- Merge Request titles were not friendly
So I sat down and modified the process. We started using the official Renovate image, which includes everything needed to analyse the dependencies of many popular package managers. I created presets defining our expectations for PHP packages (ignoring 0.* development versions; version updates only within the declared constraint, with Semantic Versioning preserved; updating individual packages within a single MR; linking internal packages from the company’s on-premise Private Packagist in the table summarising the changes). These presets live in a separate repository, so they can be used in any PHP project. Support for other managers has been added - Renovate also keeps track of `Dockerfile` and `.gitlab-ci.yml`. We also used the code owners mechanism (initiated earlier by me) to automatically assign reviewers to the created merge requests.
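I won’t reproduce our presets here, but the key knobs are standard Renovate options; a sketch of what such a preset might contain (the exact `rangeStrategy` value and the 0.* exclusion rule are assumptions on my side):

```json
{
    "$schema": "https://docs.renovatebot.com/renovate-schema.json",
    "extends": ["config:base"],
    "enabledManagers": ["composer", "dockerfile", "gitlabci"],
    "rangeStrategy": "update-lockfile",
    "packageRules": [
        {
            "matchCurrentVersion": "/^0\\./",
            "enabled": false
        }
    ]
}
```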
All these changes allow us to update dependencies on a daily basis without much work. Merge Requests contain single-package updates, so the diff is small and the `content-hash` does not change, so all issued MRs can be merged once they have been approved by reviewers and have passed the pipeline. In theory, we could even let the bot merge changes automatically after a passing pipeline, but we haven’t matured to that yet 😅
Multi-stage Docker build
In my opinion, a Docker-based local environment is a must-have for anyone who cares about repeatable results when working with the project. Such an environment should reflect production - maybe not the full infrastructure, but at least the runtime: PHP version, available extensions, system libraries - all of this should be consistent.
In GetResponse, the `Dockerfile` had some discrepancies compared to the production state, and it was structured in such a way that the build cache was invalidated quite easily, so each image rebuild took a long time. During this year, my colleagues with DevOps flair and I managed to carry out a lot of work in this area, thanks to which:
- a multi-stage build was introduced, with an explicit division into the runtime environment (operating system, packages) and the build of the application itself (installing project dependencies, generating application cache and other static files)
- we separated development targets (used in CI to run QA tasks) from production targets (intended to run, surprisingly, in production 😉), sharing a common base but differing in many aspects
- to work with the application locally, we use a target that does not contain the application itself, because the application’s directory in the image is overridden by a `docker compose` volume; this reduces the time required to build an image locally and virtually eliminates the risk of build cache invalidation (the operating system and packages change extremely rarely)
- we managed to align the Docker runtime with the current state of production (yes, we are not running Docker in production yet, but we’re close 😅) and to create tasks verifying whether the extensions we added based on the production state are actually required (or whether the code that used them is simply dead)
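A skeleton of such a multi-stage `Dockerfile`, assuming a PHP-FPM base image (stage names and extensions are illustrative; a real setup would also drop the build tooling from the final image):

```dockerfile
# runtime: operating system and PHP extensions; changes rarely,
# so its build cache stays valid for a long time
FROM php:7.4-fpm-alpine AS runtime
RUN docker-php-ext-install opcache pdo_mysql

# development: runtime plus tooling, used in CI for QA jobs; locally
# the application directory is mounted over it as a compose volume
FROM runtime AS development
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

# production: the application and its dependencies baked into the image
FROM development AS production
WORKDIR /var/www/app
COPY . .
RUN composer install --no-dev --optimize-autoloader
```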
It was a really good job 👍
Clean up Composer dependencies
When I started working with GetResponse, there were a lot of #Composer dependencies marked as abandoned. This means they were neither developed nor supported, and thus they were technical debt and a potential source of problems. I managed to clean up the dependencies so that not even a single abandoned package remained.
A lot of packages were also locked to a specific version, so I allowed them to update to newer ones. The record holder was `aws/aws-sdk-php`, which was updated from version `3.103.2` to `3.208.5` 😅 Remember: if you have a problem with your application after updating a package, try to diagnose the cause and eliminate it; if you can’t, report the bug on Github / Gitlab. Only as a last resort should you block the faulty version of the dependency, still avoiding rigid constraints, because those are a straight path to a huge technical debt that will sooner or later explode.
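To make that advice concrete in `composer.json` terms: prefer a caret constraint over a pinned version, and if a release turns out to be broken, exclude just that release (the second package name is made up):

```json
{
    "require": {
        "aws/aws-sdk-php": "^3.103",
        "acme/some-package": "^2.1 !=2.3.1"
    }
}
```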
Updated Symfony components
When I installed the project’s dependencies for the first time, there were Symfony components even in version `3.4`! We successively upgraded these packages - first to `4.*`, and in recent days my Merge Request updating all components to version `5.4` was merged into the main branch. It was not as easy as it looks, because the application uses a lot of internal components that are based on Symfony components, and unraveling these dependencies across a dozen repositories required a lot of gymnastics. Unfortunately, v6 will have to wait, because of the PHP8 matter mentioned before…
Getting rid of the oldest versions of the components made it possible to use `symfony/framework-bundle` and implement a kernel that can automatically build a DI container (for people working with regular Symfony this seems natural, but GR does not use the Symfony Framework as such, only its individual components - the whole is glued together in a custom way).
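GR’s kernel is obviously more involved, but the core idea is what `MicroKernelTrait` gives you; a minimal sketch (the class name and config path are illustrative):

```php
<?php

declare(strict_types=1);

use Symfony\Bundle\FrameworkBundle\FrameworkBundle;
use Symfony\Bundle\FrameworkBundle\Kernel\MicroKernelTrait;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;
use Symfony\Component\HttpKernel\Kernel as BaseKernel;

final class AppKernel extends BaseKernel
{
    use MicroKernelTrait;

    public function registerBundles(): iterable
    {
        yield new FrameworkBundle();
    }

    protected function configureContainer(ContainerConfigurator $container): void
    {
        // service definitions (including the custom-glued legacy parts)
        // are imported here and compiled into the DI container
        $container->import(__DIR__ . '/../config/services.yaml');
    }
}
```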
Other
The initiatives described above were often spread over time and consisted of many stages. In between, various smaller tasks got resolved. I provided advice and assistance many times, whether in discussions or as a reviewer in Merge Requests. I also implemented a lot of small improvements to the tools and processes we use, but it makes no sense to describe them all, because it would be hard to even remember them 😉
Ongoing projects
Direction: Kubernetes!
All the work mentioned above on the `Dockerfile` and the rest of the build-related files had one goal: migrating the application to a Kubernetes cluster. We use Kubernetes clusters extensively at the development stage (test environments, Gitlab runners) and partially in production. As I mentioned, the GetResponse application does not use Docker (let alone k8s) in production yet, but we are close!
Perhaps this month we will manage to run an instance on the cluster and redirect part of the traffic there. That will not be the end of the work, however, because the GR infrastructure is more complex than that - the migration to k8s consists of many stages, and I am one of the coordinators of this process 😁
Road to Cloud
In the meantime, I was honoured with the role of Chief Architect in the Road to Cloud project, which aims to migrate GR services to the cloud. Before I took on this role, a few small applications had already moved there, but for the entire organisation it was just a warm-up. I have been given a lot of trust, which I really appreciate, because let’s face it - I do not have much experience in this area, as I had not had the opportunity to use cloud solutions before. Fortunately, I do not have to perform this migration single-handedly - I work with the DevOps team and developers, coordinating the work, organising meetings, writing notes, creating and carrying out tasks and managing the roadmap. The project is demanding and time-consuming, and it has to be reconciled with other ongoing tasks, so keeping up the pace can be tough at times.
The aforementioned migration of GetResponse applications to Kubernetes is in fact a stage of the Road to Cloud project: since we want to move to the cloud, we want to arrive there with a proven, Docker-based solution. It will also make the PHP8 migration easier, because with the runtime environment wrapped in an image, we can change it freely without going beyond the development process (a rollout is just an image replacement, with no infrastructure work required).
Numerical trivia
- So far, I have authored 125 merge requests on Gitlab (I do not know the number from Crucible, because it has already been turned off), plus a dozen or so additional MRs issued on behalf of Renovate Bot (PHPStan updates often required additional changes)
- I participated in about 60 merge requests on Gitlab as a reviewer (again, I do not know the number from Crucible, but it was probably twice as many or even more), adding an endless number of comments 😉
- I created 182 tasks in Jira (of which 106 have already been completed)
- I am working on a MacBook for the first time, and during these 12 months I have had as many as 3 of them 🤪
Summary
It was a very successful year! Not everything went as expected, and I didn’t do everything the way I should have. However, looking at the overall picture and everything that has been achieved, I can confidently say that I am satisfied 🙂