
Code analysis: problems, solutions, prospects

Software vulnerabilities have always been, and will always be, one of the main gateways for attackers. That is why secure development has been a growing trend for years: more and more vendors focus on identifying and eliminating vulnerabilities at the development stage. One of the main ways to find vulnerabilities and backdoors is code analysis.

 

Dan Chernov, DerSecur’s Chief Technology Officer, told us about the most popular code analysis technologies, the direction in which they are developing, and the changes this domain can expect in the near and more distant future.

Dynamic, static, and binary analysis

Today, the two most mature and popular code analysis technologies are Dynamic Application Security Testing (DAST) and Static Application Security Testing (SAST). Binary analysis is a variation of SAST.

 

Dynamic analysis is also referred to as the “Black Box” method: it checks the program for vulnerabilities at run-time. This method has its advantages. First, since vulnerabilities are examined in the running program and a bug is confirmed by exploiting it, there are fewer false positives than with static analysis. In addition, this type of analysis does not require source code. But there are weaknesses, too. In particular, the method cannot detect every possible vulnerability, so some will be missed: a time bomb or hardcoded credentials, for example, are invisible to it. The method also requires reproducing the production environment as accurately as possible during testing.

Static analysis, the “White Box” method, is a type of testing where, as opposed to dynamic analysis, the program is not executed; instead, its entire code is analyzed. As a result, we can detect more vulnerabilities. The advantage of the method is that we can apply it at the earliest stages of development: the earlier we detect a vulnerability, the cheaper it is to fix.

 

The method has two drawbacks. The first is false positives, which create the need to assess whether we have found a real vulnerability or just a scanner error. The second is that we need the program source code, which is not always available. In that case, binary analysis helps: combined with reverse engineering technology, it lets us conduct a static analysis of the code even without the source. In real life this is often the only way to competently and fully identify vulnerabilities in applications.

 

For example, we may need to make sure that the source code we checked is the same code that will later run in production. Besides situations where someone deliberately leaves backdoors in the code, a number of vulnerabilities can also be introduced by the compiler. This is by no means an exotic situation: the compiler is also written by people and is not immune to errors. Essentially, we need to take the executable file from the production build and check it for vulnerabilities.

 

Another option is to run the application through a dynamic analyzer: we have a black box in front of us, we cannot open it, but we can manipulate it: “kick it, lift it, shake it, drop it,” and draw conclusions from the results. In dynamic analysis, different data is fed into the application, different sequences are entered into text fields, and if it is a web site, a variety of commands, protocols, and packets are sent. From the application’s response, we conclude whether there are vulnerabilities in it. DAST is especially good for web applications because it also lets you probe a system protected by a firewall.
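
To make the “kick it and see” idea concrete, here is a minimal DAST-style sketch in Python. The target URL, parameter name, and payload list are hypothetical; a real dynamic analyzer uses far larger payload sets and much smarter response checks.

```python
# Minimal DAST-style probe: send varied payloads to a running web app
# and inspect how it responds. Endpoint and payloads are hypothetical.
import requests

TARGET = "http://localhost:8080/search"  # hypothetical endpoint
PAYLOADS = ["test", "' OR '1'='1", "<script>alert(1)</script>", "A" * 10000]

def probe(url: str) -> None:
    for payload in PAYLOADS:
        try:
            resp = requests.get(url, params={"q": payload}, timeout=5)
        except requests.RequestException as exc:
            print(f"[!] {payload!r}: request failed ({exc})")
            continue
        # Naive heuristics: server errors or reflected payloads hint at problems.
        if resp.status_code >= 500:
            print(f"[!] {payload!r}: server error {resp.status_code}")
        elif payload in resp.text:
            print(f"[?] {payload!r}: input reflected in response (possible XSS)")

if __name__ == "__main__":
    probe(TARGET)
```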

 

However, DAST only helps to detect a certain range of vulnerabilities. Threats like a trigger-based time bomb that will launch malware in the system, a hidden account, or an insecure password cannot be detected by dynamic analysis. Binary analysis, on the other hand, turns the black box into a white one, so that its contents can be examined with static analysis.
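
For illustration, here is a contrived Python fragment containing exactly the kinds of flaws mentioned above: a hardcoded credential and a date-triggered time bomb. Dynamic testing is unlikely to ever hit the trigger, while static or binary analysis can spot both issues simply by inspecting the code.

```python
# Contrived example: flaws that static or binary analysis can find by
# inspection, but that dynamic testing is unlikely to trigger at run time.
import datetime

ADMIN_PASSWORD = "P@ssw0rd123"  # hardcoded credential, flagged by SAST rules

def maintenance_job() -> None:
    # "Time bomb": hidden logic that activates only after a specific date,
    # so a normal test run never exercises this branch.
    if datetime.date.today() >= datetime.date(2030, 1, 1):
        disable_security_logging()  # hypothetical malicious action

def disable_security_logging() -> None:
    ...
```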

 

Sometimes the source code of a system is simply lost while the system is still actively used and cannot be retired immediately. Binary analysis helps check such a system for vulnerabilities so that protective measures can be taken promptly.

True or False: how to solve the main problem of automated code analysis?

False positives are an issue for any system that automatically analyzes something and outputs a result, and code analyzers are no exception. The more false positives there are, the more expensive the tool is for the client to use: verification takes human resources and time.

 

Currently, there are several directions of technology development aimed at minimizing false positives. One of the hottest is the use of artificial intelligence, or more precisely its subset, machine learning (ML). How can we scan code for vulnerabilities with ML?

 

The first option is to train the ML analyzer manually: write lots of examples of correct and incorrect code and train the analyzer to detect such errors. The disadvantage of this approach is that we may have to spend a lot of time preparing suitable samples to train the analyzer.

 

Another variant is to mark up the code of real applications, flagging the fragments the analyzer should generate error warnings for. In any case, a lot of work has to be done, since we need tens of thousands of samples of code errors for training. And considering that some vulnerabilities (memory leaks, for example) can appear in code written in an almost infinite number of variations, the number of samples required for training makes the task virtually impossible.
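
As a toy sketch of the manual-training idea, assuming labeled snippets are already available, one could train a simple text classifier over code fragments. The samples below are invented, and a real analyzer would need tens of thousands of them plus far richer features than raw tokens.

```python
# Toy sketch: train a classifier on labeled code fragments.
# Two invented samples; a real model needs tens of thousands.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

samples = [
    'query = "SELECT * FROM users WHERE id=" + user_input',              # vulnerable: concatenation
    'cursor.execute("SELECT * FROM users WHERE id=%s", (user_input,))',  # safe: parameterized
]
labels = [1, 0]  # 1 = vulnerable, 0 = safe

model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]+|\S"),  # crude code tokenizer
    LogisticRegression(),
)
model.fit(samples, labels)

print(model.predict(['sql = "DELETE FROM t WHERE id=" + request_param']))
```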

 

The second option is to skip preparing examples manually and train an ML analyzer on large amounts of open source code. You can track the history of commits on GitHub and identify patterns of changes or fixes to program code. The problem is that commits on GitHub are often rather chaotic: a developer who does not want to spend time on separate commits may bundle a couple of unrelated changes together, or simply rewrite a piece of code. Afterwards, even a human cannot tell whether the corrected errors are related to each other.
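
A rough sketch of this mining approach, assuming a local git clone: walk the commit log and keep only small, fix-like commits as candidate training material. The keyword and diff-size heuristics are purely illustrative, and the noise described above is exactly what makes such data hard to use.

```python
# Rough sketch: harvest candidate "bug fix" commits from a local git clone.
# Message keywords and diff-size limits are illustrative heuristics only.
import subprocess

def fix_like_commits(repo_path: str, limit: int = 1000) -> list[str]:
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{limit}", "--pretty=format:%H %s"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    candidates = []
    for line in log:
        sha, _, message = line.partition(" ")
        if any(word in message.lower() for word in ("fix", "bug", "vuln", "cve")):
            stat = subprocess.run(
                ["git", "-C", repo_path, "show", "--stat", sha],
                capture_output=True, text=True, check=True,
            ).stdout
            if stat.count("\n") < 40:  # prefer small, focused changes
                candidates.append(sha)
    return candidates

print(fix_like_commits("path/to/local/clone")[:10])
```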

 

You can, of course, hire a small army of specialists who will review the source code and point out where an error was fixed, where the code was merely rewritten, where a new feature was added, and where the requirements changed. That, in fact, means going back to manual training, not to mention the increase in the cost and duration of the task. Or you can try to detect errors in open source code automatically: programming such a search is possible but far from easy, and, most importantly, the quality of that analysis will be questionable.

So, in general, using AI to analyze code is a very time-consuming and expensive enterprise with an uncertain outcome.

 

But there is another way to deal with false positives: a mathematical one.

Fuzzy logic: how it works

Fuzzy logic is a relatively young branch of mathematics. It is a generalization of classical logic and set theory and is based on the concept of a fuzzy set, first introduced in 1965. Whereas ordinary logic deals with true or false, 0 or 1, fuzzy logic operates with gradations between true and false. Fuzzy logic can state that an event is true to some degree and false to some degree, which is closer to human thinking.

 

Imagine you have a glass of water and you must state that the water is cold if its temperature is below 15 degrees and warm if it is above 15 degrees. You dip your finger into the glass (say the water is 16 degrees, but you do not know that). You are not going to say, “Yes, it’s definitely warm.” At threshold values we hesitate internally: kind of warm, but also rather cold. Fuzzy logic helps us get away from linear thinking.
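
The water example maps directly onto a fuzzy membership function. In the sketch below, the transition band of 13 to 17 degrees is an arbitrary illustrative choice; with it, 16 degrees comes out warm to a degree of 0.75 and cold to a degree of 0.25, rather than simply “warm.”

```python
# Fuzzy membership for "the water is warm": instead of a hard cutoff at 15°C,
# warmth rises gradually from 0 to 1 over a transition band (13°C..17°C here;
# the band width is an arbitrary illustrative choice).
def warm_degree(temp_c: float, low: float = 13.0, high: float = 17.0) -> float:
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)

for t in (10, 15, 16, 20):
    w = warm_degree(t)
    print(f"{t}°C: warm to degree {w:.2f}, cold to degree {1 - w:.2f}")
```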

 

With ordinary linear thinking, a system is trained to decide that something is either true or false. If we set a rigid mathematical threshold because we want fewer false positives, the machine will indeed report fewer false positives, but it will start missing real vulnerabilities. If we move the threshold in the opposite direction, the machine stops missing vulnerabilities, but produces many false positives.

 

A false positive mitigation mechanism based on the mathematical apparatus of fuzzy logic, implemented in a code scanner, allows fine-tuning these filters, balancing between reducing false positives and losing accuracy in identifying vulnerabilities. We can use filters, for example, to display only those vulnerabilities the fuzzy logic mechanism is fully confident about. In addition, we can fine-tune the required level of confidence for a detected vulnerability. The Confidence scale, for example, can be used to set more stringent criteria for critical-level vulnerabilities, by which the system will classify potential vulnerabilities as real ones, while medium- and low-level vulnerabilities can have less stringent assessment criteria.
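
In practice, this kind of tuning boils down to per-severity confidence thresholds. The minimal sketch below uses hypothetical field names and threshold values, not DerScanner’s actual interface.

```python
# Minimal sketch of severity-dependent confidence filtering.
# Finding fields and threshold values are hypothetical, not any scanner's API.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str      # "critical", "medium", or "low"
    confidence: float  # 0.0 .. 1.0, produced by the fuzzy logic engine

# Stricter criteria for critical findings, looser ones for the rest.
THRESHOLDS = {"critical": 0.9, "medium": 0.6, "low": 0.4}

def report(findings: list[Finding]) -> list[Finding]:
    return [f for f in findings if f.confidence >= THRESHOLDS[f.severity]]

findings = [
    Finding("hardcoded-password", "critical", 0.95),
    Finding("hardcoded-password", "critical", 0.70),  # filtered out as too uncertain
    Finding("weak-hash", "medium", 0.65),
]
for f in report(findings):
    print(f.rule, f.severity, f.confidence)
```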

 

The more languages, the better

There are more and more programming languages, and many of the new ones are quickly gaining popularity. For example, Rust, which is positioned as a safe replacement for C++ and today is often used to write desktop applications and backends. Or Golang, which is used to create high-load services: online trading platforms, RBS, messengers, and so on. Or Dart, designed for writing web and mobile clients.

 

And it often happens that some part of a system is written in a language that a given tool cannot analyze. Therefore, if we want code analysis to be efficient and user-friendly, it is important to have all languages on board. The user should not have to wonder which languages the application is written in and whether the analyzer supports them all. The code should simply be loaded into the analyzer, which will itself determine the languages used and check them all without missing anything.
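
A crude sketch of the “just load the code and let the analyzer sort it out” idea: detecting the languages present in a project by file extension. The extension map is abbreviated, and a real analyzer goes much deeper, inspecting file contents, build scripts, and binaries.

```python
# Crude sketch: detect the languages present in a project by file extension.
# A real analyzer also inspects file contents, build scripts, binaries, etc.
from collections import Counter
from pathlib import Path

EXTENSIONS = {
    ".java": "Java", ".kt": "Kotlin", ".go": "Go", ".rs": "Rust",
    ".py": "Python", ".js": "JavaScript", ".ts": "TypeScript",
    ".dart": "Dart", ".c": "C", ".cpp": "C++", ".cs": "C#", ".swift": "Swift",
}

def detect_languages(project_root: str) -> Counter:
    counts = Counter()
    for path in Path(project_root).rglob("*"):
        lang = EXTENSIONS.get(path.suffix.lower())
        if lang:
            counts[lang] += 1
    return counts

print(detect_languages("path/to/project"))
```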

 

Technologies for the future

So what’s beyond the horizon? What technologies can be implemented in code analyzers in the near future?

 

There is a growing demand in the code analysis market for binary analysis of complex systems. Let’s say a company buys software from a third-party vendor and installs it in its critical information infrastructure, and IS specialists want to assess the real security level of that system. How is this different from existing technology? With modern binary analysis, you can only load and analyze each executable file individually. For a complex multi-component system, you need to send the analyzer a wide range of interrelated files: installation files, ready-made system files, libraries, executable files, and so on. These are not just separate files but a single piece of software made up of different logical components. This requires an extremely complex analysis that is very difficult to implement. For now, such a feature remains exotic.

 

The prospect of integrated systems that take advantage of all advanced analysis methods, DAST, SAST, IAST, MAST, and SCA (Software Composition Analysis), looks very realistic. SCA takes an inventory of the libraries used in the code submitted for analysis and checks them against a database that lists libraries and the vulnerabilities they contain. However, so far no vendor has brought all these components together, and the reports produced by each system are so different that it is impossible to reconcile them. That is why the creation of an integrated system combining the capabilities of different types of code analysis with a single reporting system looks promising.
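
In essence, the SCA step boils down to matching the project’s dependency inventory against a vulnerability database. In the toy sketch below, the package names, versions, and the database itself are made up.

```python
# Toy SCA sketch: match a dependency inventory against a vulnerability database.
# The inventory and the "database" below are made up for illustration.
dependencies = {"libfoo": "1.2.0", "libbar": "3.4.1"}  # name -> version in use

vulnerability_db = {
    # name -> list of (vulnerable_versions, advisory_id)
    "libfoo": [({"1.1.0", "1.2.0"}, "CVE-XXXX-0001")],
    "libbaz": [({"2.0.0"}, "CVE-XXXX-0002")],
}

for name, version in dependencies.items():
    for vulnerable_versions, advisory in vulnerability_db.get(name, []):
        if version in vulnerable_versions:
            print(f"{name} {version} is affected by {advisory}")
```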

 

It is worth mentioning another interesting technology for protecting applications from vulnerability exploitation: RASP (Runtime Application Self-Protection). Gartner once described it as promising, but the technology of software self-protection does not yet seem effective. Some vendors have already tried to implement it, but it only works in particular cases and is not applicable to all programming languages. The idea is to add code to the protected software itself so that the application can recognize when it is being attacked and block the attack. The problem is that web application architectures differ widely, so this approach works only if the protection is developed individually for each application and each new version of it, or if the RASP system allows very deep customization. Before RASP can be considered a full-fledged replacement for code analysis of web applications, technology in general, and artificial intelligence in particular, will have to take a few more steps forward.
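
The idea behind RASP can be illustrated with a tiny WSGI middleware that lives inside the application and rejects requests matching obvious attack patterns. The pattern list is illustrative; a real RASP solution needs much deeper insight into the application.

```python
# Tiny illustration of the RASP idea: protection code lives inside the app
# itself. A WSGI middleware inspects each request and blocks obvious attacks.
# The pattern list is illustrative; real RASP needs much deeper context.
import re
from urllib.parse import unquote

ATTACK_PATTERNS = [
    re.compile(r"(?i)union\s+select"),   # crude SQL injection signature
    re.compile(r"(?i)<script"),          # crude XSS signature
    re.compile(r"\.\./"),                # path traversal
]

class RaspMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        raw = unquote(environ.get("QUERY_STRING", ""))
        if any(p.search(raw) for p in ATTACK_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by in-app protection"]
        return self.app(environ, start_response)

# Usage: app = RaspMiddleware(app)  # wrap any WSGI application
```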

 

P. S. But what if...?

And what if a quantum computer is built and the era of quantum computing arrives? How will we then protect ourselves against, for example, vulnerabilities that can be exploited by brute force: insufficient encryption key sizes, weak hashing algorithms, salts hardcoded in the source code, and so on? If such technologies come into mass use, everything that is now considered secure, from complex passwords to encryption algorithms, will be cracked on the fly. Obviously, it will be necessary to develop completely different approaches to both static and dynamic code analysis. The prospects are still unknown to us, but very exciting.
