From natural-language requirements to automated testing

Squad Mobility production series
December 30, 2025, by Robert Hoevers, Squad Mobility


We hope you are enjoying the holiday season so far. As the New Year is coming soon, we want to share a new article to start 2026 with positive vibes. 

This time, our intern Bogdan and lead software engineer Kyriakos share the project they've been working on behind the scenes: automating unit testing.

Enjoy the read!

Automating it all


Software unit testing has been an integral part of the automotive software development lifecycle for many years. Necessary, yes. Time-consuming, immensely.

Designing, implementing and running software test cases takes up a large portion of the total development efforts. So, what if we could automate all these steps?

Our work at SQUAD introduces an automated approach that generates unit tests directly from structured natural-language requirements, significantly reducing manual effort while improving consistency, traceability, and test quality.


Unit testing in automotive


In a nutshell, unit testing is a way of checking the smallest parts of a computer program to make sure they work correctly, and to find if and where there is a bug in the program. This method isolates one small piece of code from the rest of the program and tests it independently. The results show whether this particular piece of code passes or fails the test, helping to find out where the bug is.
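To make this concrete, here is a minimal sketch of a unit test, in Python for readability (production automotive code would typically be C with a dedicated test harness). The `clamp_speed` function and its signal names are hypothetical, invented purely for illustration: one small unit is isolated and checked against expected outputs.

```python
import unittest

def clamp_speed(requested_kmh: float, limit_kmh: float) -> float:
    """Hypothetical software unit: limit a requested speed to a maximum."""
    return min(requested_kmh, limit_kmh)

class TestClampSpeed(unittest.TestCase):
    """Each test isolates clamp_speed and compares actual vs. expected output."""

    def test_below_limit_passes_through(self):
        self.assertEqual(clamp_speed(80.0, 120.0), 80.0)

    def test_above_limit_is_clamped(self):
        self.assertEqual(clamp_speed(150.0, 120.0), 120.0)

if __name__ == "__main__":
    unittest.main()
```

If `clamp_speed` contained a bug, only these two tests would fail, pointing straight at the faulty unit rather than at the program as a whole.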

In automotive, software engineering is based on defining distinct software units, each with a specified function. This approach aims to improve maintainability of the code, make bugs easier to find and fix, reduce interface complexity and minimize unintended changes. Test cases are defined based on natural-language system requirements, and the unit tests ensure that the functionality of the software units follows the specified requirements. The goal is to simulate the inputs of the software unit and compare the actual outputs with the expected ones. Once the test cases have been executed, a report is generated, containing the status of all the simulated test cases (pass/fail) and many other metrics.


Manual isn’t ideal for scaling


As modern vehicles contain millions of lines of code, software must be developed under strict quality and safety constraints. This makes unit testing essential and mandatory. The process, unfortunately, is still largely manual. Test cases are written and linked to requirements manually. Any change requires extensive test updates, and legacy tests can quickly become outdated.

On average, creating a single unit test can take 15–20 minutes of engineering time. Now, imagine this, multiplied across hundreds or thousands of tests. It quickly becomes expensive and even a bottleneck for fast iteration.


The challenge: can we automate this?


If only we could automate test case generation directly from natural-language requirements. It would solve many, if not all, of the concerns we just mentioned, and reduce the time spent from minutes to mere seconds. So, can we do something about this manual business? The answer: we can, and we have!

We drew up a list of items that should be in place so that the system can automatically generate functional and consistent test cases.

  • Concrete and unambiguous requirements, with the strict structure defined in ISO 29148, the international standard for Requirements Engineering in Systems and Software Engineering
  • Proper naming conventions for both the software unit names and the input signals, so that natural-language names can be matched to software ones
  • A naming convention for distinguishing between different types of tests (I/O, performance, condition assertions, etc.)
  • An interpreter, which transforms natural language into a format the computer can use
  • A generator, which constructs the test-case code from the information the interpreter provides
  • A framework which executes the generated test cases

This, of course, is a diamond in the not-so-rough. Our work-in-progress enables our team to focus less on writing tests and more on building reliable, high-quality software, without compromising verification rigor.

While developing the framework requires an initial investment, the cost is quickly offset by the ability to generate and maintain large test suites automatically. As software evolves, the verification effort remains stable, predictable, and repeatable. 

An immense business value


Tangible business benefits quickly emerge alongside the many advantages of automated software unit testing we mentioned before.

  • Reduced development cost
    Less manual effort spent writing and maintaining tests

  • Faster iteration cycles
    Tests are regenerated automatically after software changes

  • Improved quality and consistency
    Every requirement is verified in a standardized way

  • Elimination of legacy test issues
    Tests stay up-to-date with the codebase at all times

  • Scalable verification
    The approach scales with the system size and its complexity


This joyful software development twist was made possible by working together as a team and always looking for ways to improve. Thanks to our intern Bogdan Neacsu for his incredibly innovative dissertation on the subject and our lead software engineer Kyriakos Kapetis for his guidance and never-ending enthusiasm!