Building a secure development process for a retailer. Experience of a major project
Some time ago, we finished building a secure development process based on our application code analyzer for a very large retail company. Let’s be honest: it was a long and difficult experience, but it gave a powerful impetus both to the development of the tool itself and to the growth of our team’s competence in implementing such projects. We want to share this experience with you in a series of articles. We’ll tell you how it went, what pitfalls we faced, how we got out of tough situations, and what it all did for the client and for us. In other words, we’ll be talking about the meat of the implementation. Today, we cover the secure development of the retailer’s web portals and mobile apps.
First, a few words about the project in general. We built a secure development process in a large retail company whose IT department has a huge staff and is split into many streams that hardly ever overlap. These streams can be divided into three main groups. The first, very large group dealt with the point-of-sale (cash register) software, written primarily in Java (90% of all projects). The second group of systems, the most extensive one in terms of code volume, worked with SAP applications. The third block was a hodgepodge of web portals and mobile apps: all sorts of external websites for the company’s customers, mobile apps for those websites, and internal resources such as mobile apps and web portals for the retailer’s staff.
The overall goal, as formulated by the customer (the IS department), sounded fairly mundane for all three groups: “We want fewer vulnerabilities and a secure development process for all systems created internally.” In practice, however, things differed greatly between the groups, and we had to make countless compromises at each step of implementing secure development. Some nuances helped us build the process, while others got in the way. In the end, we managed to create a more or less common approach for most projects.
We formulated this approach as simply as possible: the code most relevant to all developers would be scanned. To put it in GitFlow terms — all groups of projects, with the exception of SAP, were running develop branches in GitFlow — the main develop branch would be scanned on a schedule.
As always, there are exceptions to every rule: the general approach could not be applied as is everywhere, for a number of reasons. Firstly, our code analyzer has several limitations stemming from the fact that we want to be able to run the deepest possible analysis of certain programming languages. For Java, analysis of bytecode is much deeper than analysis of source code, so for Java projects we needed to pre-build the bytecode and only then send it for analysis. With C++, Objective-C, and iOS apps, the analyzer was hooked into the process at the build stage. We also had to account for various individual requirements from the developers of all the projects. Here is how we built the process for web portals and mobile apps.
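For Java projects, this build-then-scan step can be wired into a scheduled pipeline. Below is a minimal sketch of a `.gitlab-ci.yml` job, assuming a Maven build; `dersc-cli` is a placeholder name for whatever upload mechanism the analyzer’s agent provides, not an actual DerScanner command:

```yaml
# Runs only on scheduled pipelines against the develop branch.
sast-scan:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $CI_COMMIT_BRANCH == "develop"'
  script:
    # Build the bytecode first: Java analysis is deepest on compiled artifacts
    - mvn -q -DskipTests package
    # Hand the artifacts to the analyzer (placeholder upload command)
    - dersc-cli scan --project "$CI_PROJECT_NAME" target/*.jar
```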
It might seem that all of these apps form one logically sound group, but in reality they were an awful mess. There were over 120 (!) web portals alone. The company is very large, with many business, administrative, and technical units, and every so often one of them decides it needs its own portal and mobile app. That portal and app get created, used for a while, and then abandoned for good. As a result, at the initial stage we had to take inventory for the customer, since even the developers of these apps did not have a uniform list of codebases. For example, to manage the repositories in this group, the developers used two separate GitLab instances, administered by different people. On top of that, a significant portion of the portals and mobile apps had been implemented by external contractors, who, when a release was nearing, would often simply hand the company the source code of the new version on a flash drive. The company ended up with a menagerie of apps and a complete mess of code. We had to compile a list of all the projects, find everyone responsible for them, including technical owners and team leaders, and then decide together with the main customer, the IS department, which of those we would analyze.
As a result, we chose to analyze the production systems and the software used in support, while the archived systems were not touched at all. A number of internal applications were considered non-critical, since they could not cause any financial harm to the company, and were therefore excluded from the analysis. One example was a system for managing packers and movers within a particular warehouse: it contained nothing sensitive for the company’s external customers, and a compromise by someone inside the company would only cause minor inconvenience to a few departments.
For this group of software, the IS department set the priority task of introducing code analysis for vulnerabilities, while the developers asked for a user-friendly verification process integrated into their development cycle.
Two different versions of GitLab were used as a version control system for the web portal and mobile app groups.
Setting up the integration with GitLab
Not all apps used CI/CD, and where it wasn’t available, we had to insist on introducing it. If you want to truly automate the testing of code for vulnerabilities, so that the system itself fetches the code from the repository and returns the results to the relevant specialists instead of someone manually uploading a few links for analysis, you have no choice but to install runners. In this context, runners are agents that automatically contact the version control system, download the source code, and send it to DerScanner for analysis.
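Registering such an agent is standard GitLab runner configuration. A sketch with placeholder URL, token, and tag values (the description and tag are illustrative, not the names we actually used):

```shell
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "shell" \
  --description "derscanner-sast-runner" \
  --tag-list "sast"
```

CI jobs carrying the matching tag are then routed to this runner, which checks out the code and hands it to the analyzer.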
The developers in the web portal and mobile app group wanted to organize secure development as a semi-automated process so that the code would be scanned for vulnerabilities without any involvement on their part. With that implemented, the security officer would verify the results of the vulnerability analysis and set tasks for the developers in Jira, should the vulnerabilities be deemed critical, or send them to the developers for clarification. The developers would then decide whether the vulnerability needed to be fixed urgently or not. If needed, they would plan the fixes to be included in a certain upcoming release.
Jira was mainly used as a bug tracker where DerScanner would automatically feed information about the vulnerabilities found.
Setting up integration with Jira
On rare occasions, team leaders would personally review the scanning results and create tasks in Jira manually.
Creating a task in Jira
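That manual step is also easy to script against Jira’s REST API. A hedged sketch in shell: the project key, credentials, and field values are placeholders; only the `/rest/api/2/issue` endpoint and the payload shape are standard Jira:

```shell
#!/bin/sh
# Build the JSON payload Jira expects when creating an issue.
# Arguments: project key, summary, description.
build_issue_json() {
  printf '{ "fields": { "project": { "key": "%s" }, "summary": "%s", "description": "%s", "issuetype": { "name": "Bug" } } }\n' \
    "$1" "$2" "$3"
}

# Example call (URL and credentials are placeholders):
# curl -u "$JIRA_USER:$JIRA_TOKEN" \
#      -H "Content-Type: application/json" \
#      -d "$(build_issue_json SEC 'XSS in search page' 'Reported by DerScanner')" \
#      "https://jira.example.com/rest/api/2/issue"
```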
We covered such cases as a special feature in our standard procedure. In some projects, all of the fixes were discussed in Slack or Telegram, and tasks were created in real-time.
As a result, after the implementation of DerScanner, the secure development process looked like this: the portals were checked daily for changes in the code of the main develop branch. If the most up-to-date develop branch had not been updated in a day, nothing happened. If it had been updated, the branch was sent for analysis to the DerScanner project corresponding to that repository: each GitLab repository mapped to a project in DerScanner, and that is where its main branch was scanned. The security officer would then review the analysis results, verify them, and start creating tasks for fixes in Jira.
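The “scan only if develop moved” check boils down to comparing the branch tip against the last scanned commit. A minimal sketch of that decision (the cron/CI wiring and the upload command are omitted; the function and file names are ours for illustration, not DerScanner’s):

```shell
#!/bin/sh
# Decide whether the nightly scan is needed.
# $1 = current tip of develop (e.g. from `git rev-parse origin/develop`)
# $2 = commit hash recorded after the previous scan
needs_scan() {
  if [ "$1" = "$2" ]; then
    echo "skip"   # nothing changed since the last scan
  else
    echo "scan"   # the branch moved: send it for analysis
  fi
}

# Typical use inside the scheduled job:
# tip=$(git rev-parse origin/develop)
# [ "$(needs_scan "$tip" "$(cat .last_scanned 2>/dev/null)")" = "scan" ] && run_scan "$tip"
```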
Results of the analysis and tasks for fixing vulnerabilities created in Jira
The vulnerabilities were usually patched starting with the critical ones that had to be dealt with urgently. After covering those, the team would move on to fixing the new bugs found in the code. In the third stage, for example, as part of addressing some technical debt, the remaining old vulnerabilities would also be fixed.
This seemingly simple process had two serious limitations. Firstly, we needed a build to analyze Android applications (i.e., those written in Java). Secondly, for iOS we needed macOS machines on which to install our agent and run an environment capable of building the applications. With the Android apps we worked things out quite easily: we simply added our own sections to the existing scripts, which also ran on a schedule. Our script sections would first build the project in the widest configuration and then send it to DerScanner for analysis. To test iOS applications, we installed our macOS agent on a Mac; it would build the code and likewise send it for analysis via GitLab CI. Then, as with the other types of software, the security officer would review the analysis results, verify them, and create tasks for fixes in Jira.
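In CI terms, the two mobile cases looked roughly like this. The sketch below assumes a Gradle build for Android and an Xcode build on a tagged macOS runner; the scheme name, artifact paths, and the `dersc-cli` upload command are placeholders, not the actual project values:

```yaml
android-scan:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    # Build the widest configuration so the analyzer sees as much code as possible
    - ./gradlew assembleDebug
    - dersc-cli scan --project "$CI_PROJECT_NAME" app/build/outputs/apk/debug/*.apk

ios-scan:
  stage: test
  tags: [macos]          # routed to the Mac with our macOS agent installed
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    - xcodebuild -scheme App -configuration Debug build
    - dersc-cli scan --project "$CI_PROJECT_NAME" build/
```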
Any other projects written in Java were also lumped together with the web portals and mobile apps; we built and analyzed them according to a similar scheme.
For projects that didn’t have CI/CD, which was a prerequisite for us, we would simply say: “Guys, if you want these analyzed, build them manually and feed them to the scanner yourselves. If you don’t use Java or JVM languages such as Scala or Kotlin, you can simply give the scanner a link to the repository, and everything will be fine.”
As is clear from the above, the main problem with this application stack was the lack of CI/CD in many projects; developers would often do their builds manually. We then started integrating our analyzer with SharePoint portals written in C#. By now, C# has more or less successfully migrated to Linux systems, even if not quite full-fledged ones, but when our project was in full swing the language still ran on Windows, and we had to install a Windows agent for GitLab. This was a real challenge, as our people were used to Linux commands, and various workarounds were needed: in some cases it was necessary to specify the full path to the *.exe file, in others not; sometimes certain characters had to be escaped; and so on. After implementing the SharePoint integration, the PHP mobile app project team told us they didn’t have a runner either and wanted to use the C# one, so we had to do it all over again for them, too.
As a result, despite facing such a heterogeneous fleet of technologies, teams, and processes, we were able to group the main cases into several pipelines, automate their execution where appropriate, and put them into practice. In doing so, we confirmed that:
– the solution we were implementing was mature and flexible enough to build DevSecOps processes in radically different deployment environments. That flexibility came from a large set of built-in and custom integrations, without which the implementation effort would have increased greatly or the project would have become impossible altogether;
– setting up the desired automation and the subsequent analysis of the results did not require excessive effort, even with a huge scope of work. Coordinating and building the implemented processes, and fully automating them, proved possible with a small expert group of 3–4 people;
– the implementation of DevSecOps automated code review tools and practices helped reveal flaws in the existing DevOps processes and became a reason to fine-tune, improve, unify, and regulate them. The end result was a win for all parties involved, from ordinary developers to the top managers of the engineering and IS departments.
Reminder: this is part one of a series about building a secure development process for a large retailer. In the next post, we will cover some details of the implementation of this project in the SAP family of applications.
Have you had any experience of your own with similar projects? We will be happy if you share your case studies on secure development practices. Please contact us: https://dersecur.com/contact-us/