Optimizing Laravel Tests: Cutting Test Time by 87% in a Multi-Database Scenario


Testing is the foundation of high-quality software development, and different scenarios call for different kinds of tests. Laravel offers a robust set of testing features, which we have leveraged extensively. However, a common issue arises sooner or later: the growing number of tests leads to longer runs. Even if each test runs quickly, the cumulative runtime under sequential execution can become substantial.

Problem: Slow Sequential Tests

So, what can be done about this? The solution lies in running tests in parallel, a method well documented in Laravel’s official documentation. This solution is straightforward… provided the application isn’t particularly complex. Our case was different: we had three concurrent database connections, two different database engines (a mix of SQL and NoSQL), a multi-tenant service, substantial legacy code, and tests that relied heavily on predefined tenant data, so multiple tests often used the same data. Laravel’s standard solution wasn’t prepared for this level of complexity.
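For reference, this is what the native parallel workflow looks like from the command line (process count is just an example value):

```shell
# Parallel testing requires the ParaTest dev dependency
composer require brianium/paratest --dev

# Run the suite across several processes; each process gets its own
# token and, by default, its own copy of the primary test database
php artisan test --parallel --processes=4

# Force the per-process databases to be recreated from scratch
php artisan test --parallel --recreate-databases
```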

Even sequential execution of such tests was challenging, but using transactions, specifically the DatabaseTransactions trait, allowed us to mitigate potential issues: after each test, changes were rolled back, leaving the database “clean” and ready for the next test. Despite this, the suite still took a long time to run.
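A minimal sketch of how that trait is used in a test class (the class, route, and data here are illustrative, not from our codebase):

```php
<?php

namespace Tests\Feature;

use Illuminate\Foundation\Testing\DatabaseTransactions;
use Tests\TestCase;

class TenantReportTest extends TestCase
{
    // Wraps every test in a transaction that is rolled back afterwards,
    // so predefined tenant data stays untouched between tests.
    use DatabaseTransactions;

    public function test_report_is_generated_for_predefined_tenant(): void
    {
        $response = $this->get('/tenants/1/report');

        $response->assertOk();
    }
}
```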

Initially, we considered refactoring the tests to use random data, thereby avoiding data collisions and allowing parallel execution without the risk of tests interfering with each other. Unfortunately, preliminary analyses revealed that this refactoring would be very time-consuming and costly, and since it doesn’t directly deliver functionality to the client, we needed another solution.

Solution: Custom Databases Handling

And we found one. By extending our database handling, we added a few extra operations to run in each parallel process. For clarity: the native solution automatically creates multiple processes, each with its own database, which works well when there is only one connection. Our solution adds support for all the other connections we use. Here are the necessary operations:

  1. For each process, create additional databases for connections B and C.
  2. If required, perform migrations for connections B and C.
  3. After the tests (or at the beginning of new ones), clean up the data and databases so they can be reused by the next process.

Here’s the actual implementation. First, create a new provider:

php artisan make:provider DatabaseServiceProvider

Then edit the new provider. The code below is only an example with some comments and should be adjusted to your specific scenario. Some polishing is also possible, such as looping over connections, but that depends on the databases you use:

<?php

declare(strict_types=1);

namespace App\Providers;

use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\ParallelTesting;
use Illuminate\Support\Facades\Schema;
use Illuminate\Support\ServiceProvider;
use Illuminate\Testing\Concerns\TestDatabases;

class DatabaseServiceProvider extends ServiceProvider
{
    use TestDatabases;

    // A base connection is required to avoid failures when a database does not exist yet.
    // This depends on the driver; MySQL, for example, allows connecting without selecting a DB.
    private static string $MAIN_CONNECTION = 'your-base-connection';

    public function boot(): void
    {
        if ($this->app->runningInConsole()) {
            $this->bootTestDatabase();
        }
    }

    protected function bootTestDatabase(): void
    {
        ParallelTesting::setUpProcess(function () {
            // Skip for in-memory (SQLite) databases; they are only used for simple unit tests
            $this->whenNotUsingInMemoryDatabase(function ($database) {
                if (ParallelTesting::option('recreate_databases')) {
                    $this->cleanNoSQLConnection();
                    $this->dropTenantDb();
                }
            });
        });

        ParallelTesting::tearDownProcess(function () {
            $this->whenNotUsingInMemoryDatabase(function ($database) {
                if (ParallelTesting::option('drop_databases')) {
                    $this->cleanNoSQLConnection();
                    $this->dropTenantDb();
                }
            });
        });

        ParallelTesting::setUpTestCase(function () {
            $this->whenNotUsingInMemoryDatabase(function ($database) {
                DB::purge();

                // Switch to proper databases based on current process token
                $token = ParallelTesting::token();
                $masterDb = empty($token) ? 'test_main' : 'test_main_test_' . $token;

                config()->set(
                    'database.connections.main.database',
                    $masterDb,
                );
                config()->set(
                    'database.connections.tenant.database',
                    $this->getCurrentTenantDb(),
                );
                config()->set(
                    'database.connections.nosql.database',
                    $this->getCurrentNoSQLDb(),
                );
            });
        });


        ParallelTesting::setUpTestDatabase(function (string $database, int $token) {
            $this->whenNotUsingInMemoryDatabase(function ($database) {
                $this->cleanNoSQLConnection();
                $this->dropTenantDb();

                // Create all required DBs for the tests and migrate them
                $tenantDb = $this->getCurrentTenantDb();
                Schema::connection(self::$MAIN_CONNECTION)->createDatabase($tenantDb);
                Config::set('database.connections.tenant.database', $tenantDb);

                $this->migrateDatabases();
            });
        });
    }

    protected function getCurrentTenantDb(): string
    {
        $token = ParallelTesting::token();

        return empty($token) ? 'test_tenant' : 'test_tenant_' . $token;
    }

    protected function getCurrentNoSQLDb(): string
    {
        $token = ParallelTesting::token();

        return empty($token) ? 'test_nosql' : 'test_nosql_test_' . $token;
    }

    protected function cleanNoSQLConnection(): void
    {
        $dbName = $this->getCurrentNoSQLDb();
        Config::set('database.connections.nosql.database', $dbName);
        // Implementation: clean up the NoSQL DB (engine-specific)
    }

    protected function dropTenantDb(): void
    {
        // Implementation: based on DB engine
    }

    protected function migrateDatabases(): void
    {
        // Example only; adjust connections, paths, and options to your setup
        Artisan::call('migrate:fresh', [
            '--database' => 'main',
            '--path' => database_path('migrations/main'),
            '--realpath' => true,
            '--schema-path' => $this->schemaPath('main'),
        ]);

        Artisan::call('migrate:fresh', [
            '--database' => 'tenant',
            '--path' => database_path('migrations/tenant'),
            '--realpath' => true,
            '--schema-path' => $this->schemaPath('tenant'),
        ]);
    }

    protected function schemaPath(string $conn): string
    {
        return database_path('schema/test-' . $conn . '-schema.sql');
    }
}

Finally, add the new provider to the bootstrap/providers.php list:

App\Providers\DatabaseServiceProvider::class
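Assuming the bootstrap/providers.php format introduced in Laravel 11, the file would look roughly like this:

```php
<?php

// bootstrap/providers.php
return [
    App\Providers\AppServiceProvider::class,
    App\Providers\DatabaseServiceProvider::class,
];
```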

The result? Test time dropped from nearly 24 minutes to just 3! This was a huge improvement, noticeable both in our CI/CD pipelines and during local development.

We quickly realized we could speed things up further with the migration squash mechanism. Instead of executing numerous migrations each time, we simply ran SQL queries from a pre-prepared, clean database dump – a SQL file created after all migrations. It’s already wired into the code presented above. The result? We shaved off another minute from the test time. While it might not seem spectacular, percentage-wise it’s a significant improvement.
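The squashed dumps can be produced with Laravel’s schema:dump command; the connection names and file paths below mirror the provider example and are assumptions, not fixed values:

```shell
# Dump the current schema of each connection into a single SQL file,
# matching the paths returned by schemaPath() in the provider
php artisan schema:dump --database=main --path=database/schema/test-main-schema.sql
php artisan schema:dump --database=tenant --path=database/schema/test-tenant-schema.sql

# Optionally delete the migration files that the dump already covers
php artisan schema:dump --prune
```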

Huge Win & Moral

Of course, the tests still need improvements, but they now run in isolation without impacting each other. This means they can be modified and improved gradually, without rushing and without delaying the delivery of new features. The significantly shorter test duration offers several advantages:

  • Reduced infrastructure costs due to shorter process runtimes.
  • Faster code delivery as developers wait less for feedback from pipelines.
  • Reduced delays during local development, positively affecting the entire software creation and delivery process.

The moral of this story? If you face a problem that seems incredibly complex, if the solution appears to be highly demanding and tedious… Step aside, focus on something else for a moment. Clear your mind and allow new, creative ideas to surface, helping you bypass the obstacle instead of dismantling it brick by brick. This approach works, as long as we give ourselves a moment for creativity.

Laravel & Unconventional Database – Why You Should Avoid That

Choosing the right database for IT projects is no easy task — it depends on business requirements, but also on limitations introduced by costs, legal issues, and even the ability to find the right technical support. In this blog post, based on our experiences at Ingenious.BUILD, I’d like to discuss the challenges of using an unconventional database like CockroachDB in a project based on Laravel, among other platforms. These challenges can also arise when our API uses completely different solutions.

Continue reading “Laravel & Unconventional Database – Why You Should Avoid That”

Minikube & Hyper-V: Fix Start Host Error

Recently, I’ve been struggling with a Minikube issue on Windows 11 with Hyper-V enabled and decided to share a quick note about it. Many people opt for VirtualBox, but I believe Hyper-V is a superior option because it’s a type-1 hypervisor that offers better performance. It’s also readily available on Windows without the need to install additional tools.

However, it’s not without its flaws, and this issue serves as a prime example. I had installed Minikube and kubectl, then started the cluster. After conducting tests and stopping it, I was unable to restart it the following day. Minikube only reported:

Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: recreate: creating host: create: creating: exit status 1

There was also a recommendation to use minikube delete to fix the issue, but it did not work. The problem was in Hyper-V, not in Minikube: the cluster was unable to start because of a broken cache. Fortunately, the solution is very simple:

  1. Use minikube delete to remove leftovers
  2. Use services.msc to disable all Hyper-V services
  3. Navigate to C:\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines Cache and remove all cache
  4. Restart computer or just re-enable Hyper-V services

After these steps, minikube start will work again and create a cluster for you without issues.
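The steps above can be sketched in an elevated PowerShell session; the Hyper-V service names and the cache path may differ between Windows builds, so treat this as an assumption to verify on your machine:

```shell
# 1. Remove Minikube leftovers
minikube delete

# 2. Stop the Hyper-V services (the scripted equivalent of services.msc)
Get-Service vmms, vmcompute | Stop-Service -Force

# 3. Clear the broken Hyper-V VM cache
Remove-Item "C:\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines Cache\*" -Recurse -Force

# 4. Start the services again (or simply reboot)
Get-Service vmms, vmcompute | Start-Service
```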

Laravel: Repository Pattern

Laravel and Eloquent offer a straightforward and powerful approach to database interaction, enabling easy data retrieval, saving, updating, and deletion from various points in an application. This simplicity, while beneficial, brings with it a significant challenge: it becomes all too easy to produce code that is tightly coupled and disorganized. This issue is particularly evident when examining the structure of a basic controller, where the convenience of direct database operations can inadvertently lead to a lack of separation of concerns and an increase in code complexity. Fortunately, there is a good solution for that.

Continue reading “Laravel: Repository Pattern”