Supabase Part 1: Securing Direct Database Queries

This is the first article in a series on Supabase, focusing on solutions to common problems, tips, and recommendations.

When working with serverless platforms, it's common to use a custom backend alongside edge functions, and Supabase is no exception. Unless you're building a small app or prototype, you'll likely run into issues sooner or later, whether due to scaling/performance limitations or missing features. Since the Supabase client relies on PostgREST for authorization, you'll need to implement similar logic yourself to secure your direct database queries.

If you’re using Kysely as your query builder (which is excellent, by the way), here’s a simplified method that does the job effectively:

// executeWithRLS.ts
import { sql, type Kysely, type Transaction } from "kysely";

interface RLSData {
    // set the same claims as the Supabase client's, and override them if necessary
    claims: Record<string, unknown>;
}

export async function executeWithRLS<DB, T>(
  kyselyClient: Kysely<DB>,
  authData: RLSData,
  callback: (trx: Transaction<DB>) => Promise<T>
): Promise<T> {
  return await kyselyClient.transaction().execute(async (trx) => {
    // set transaction-level auth variables, then run the callback inside the same transaction
    await sql`
      SET LOCAL request.jwt.claims = ${sql.lit(JSON.stringify(authData.claims))};
      SET LOCAL ROLE authenticated;
    `.execute(trx);
    return await callback(trx);
  });
}

// example-service.ts
const db = DI.get("kyselyClient"); // somehow retrieve a singleton db client
const authData = DI.get("user"); // somehow retrieve the user session from the request (e.g. expressjs request.user)
const data = await executeWithRLS(db, authData, async (trx) => {
    const newRecord = await trx.insertInto("...").values({}).execute();
    const data = await trx.selectFrom("...").where("...", "=", "...").execute();
    return data;
});

While the example code is simplified, this is essentially how it works. Once you've validated the request with the Supabase access token, you can construct an authData variable with the required claims and call executeWithRLS whenever you need to query Postgres. This ensures that Row-Level Security (RLS) policies will function the same way as with PostgREST/Supabase client queries. Functions like auth.uid() will now retrieve user data from the claims you’ve set. It's important to note that SET LOCAL commands affect only the current transaction, which is exactly what you need. This also allows you to run admin queries that bypass policies using the same database client/connection without risking data leaks.
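To make the claims construction concrete, here is one way authData could be built from an already-verified access token payload. The buildAuthData helper and the exact claim set are my own illustration, not part of Supabase or Kysely; include whatever claims your policies actually read (at minimum sub, which auth.uid() resolves, and role):

```typescript
interface RLSData {
  claims: Record<string, unknown>;
}

// payload: the decoded JWT payload, AFTER verifying its signature
// (e.g. with a JWT library and your project's JWT secret).
function buildAuthData(payload: Record<string, unknown>): RLSData {
  return {
    claims: {
      sub: payload.sub, // auth.uid() reads this claim
      role: payload.role ?? "authenticated",
      email: payload.email,
    },
  };
}
```

You would then pass the result straight to executeWithRLS for every query made on behalf of that user.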

You might hear some people recommend different approaches, such as setting connection-level parameters or issuing SET commands on each connection and reserving it per user. These are poor practices for both security and performance. Direct PostgreSQL connections are expensive and should be pooled across all clients. Supabase already handles this automatically with its pooler, "Supavisor", which has two modes: session and transaction (the default). When connecting to your database outside the Supabase client, you have three options: a direct connection, the pooler in session mode, or the pooler in transaction mode.

  1. If you choose either of the first two options (direct connections or session mode), you'll quickly exhaust your connection limit, which will degrade application performance.

  2. If you use transaction mode (recommended for scaling), you'll introduce data races and security vulnerabilities. Users may inadvertently access each other's data, because transaction mode multiplexes queries from different clients over the same pooled connections, and any connection-level change affects every query sharing that connection. This is why transaction-level roles/claims are important for ensuring both safety and efficiency.
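For reference, these three options correspond to different hosts and ports in the connection string. The values below are illustrative (they follow Supabase's usual patterns); check the connection settings in your project's dashboard for the exact ones:

```
# Direct connection (no pooler)
postgresql://postgres:[password]@db.<project-ref>.supabase.co:5432/postgres

# Supavisor, session mode (one connection held per client)
postgresql://postgres.<project-ref>:[password]@<region>.pooler.supabase.com:5432/postgres

# Supavisor, transaction mode (connections shared between transactions)
postgresql://postgres.<project-ref>:[password]@<region>.pooler.supabase.com:6543/postgres
```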

💡
Using transaction-level variables allows you to customize your queries in various ways. For example, you can add SET LOCAL statement_timeout = '10s' to adjust Postgres' maximum execution time on a per-query basis. I recommend setting a small timeout by default to prevent the database from becoming overloaded by faulty or malicious queries.
There is a function called login_as_user in the "auth" schema. While it may seem tempting to use it for authorization, that is not its intended purpose: the function is designed solely for policy testing. In fact, the Supabase dashboard uses it for user impersonation within the SQL editor. It suffers from the same issues mentioned earlier, so it must be avoided.
If you're connecting to your Postgres instance through programs like pgAdmin, DBeaver, etc., ensure you're using port 5432 (session mode) for the reasons mentioned above. Your manual connection should be isolated from transaction mode to avoid accidentally interfering with application queries and breaking RLS.
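The statement_timeout tip can be applied the same way as the claims: execute the SET LOCAL inside the transaction, before the actual queries. As a sketch, here is a small helper that validates the value before inlining it into SQL; the setLocalTimeoutSql name and the whitelist regex are my own additions, not part of Kysely or Postgres:

```typescript
// Hypothetical helper: build a per-transaction timeout statement to run
// inside the transaction (e.g. via Kysely: await sql.raw(stmt).execute(trx)).
// SET LOCAL reverts automatically at commit/rollback.
function setLocalTimeoutSql(timeout: string): string {
  // whitelist simple interval forms so nothing unexpected is inlined into SQL
  if (!/^\d+(ms|s|min)$/.test(timeout)) {
    throw new Error(`Invalid timeout: ${timeout}`);
  }
  return `SET LOCAL statement_timeout = '${timeout}'`;
}
```

Because the value is validated against a strict pattern, it is safe to inline as a literal even though it came from application code rather than a bind parameter.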

Conclusion

When using a custom backend alongside edge functions and client-side queries, always double-check that everything you're doing is secure on all fronts. Running queries with the correct roles and privileges is just as important as securing them with RLS policies. Also, ensure that you fully understand the connection pooling modes.

Note that everything described in this article also applies to edge functions if you use a database client within them. While this setup is very useful, be aware that you can't restrict database access to whitelisted IP addresses in this case, as edge functions have dynamic IPs. Querying with the Supabase JS client is not affected, because it goes through the Supabase API rather than connecting to the database directly, so network restrictions don't apply to it.

In the next article, I’ll discuss how to write efficient RLS policies.

Support Arda Beyazoğlu by becoming a sponsor. Any amount is appreciated!