Backup and Restore using the CLI

Learn how to back up and restore projects using the Supabase CLI


Back up your database using the CLI

1. Install the Supabase CLI.

2. Install Docker Desktop for your platform.

3. Get the database connection string

On your project dashboard, click Connect.

Session pooler connection string:

postgresql://postgres.[PROJECT-REF]:[YOUR-PASSWORD]@aws-0-us-east-1.pooler.supabase.com:5432/postgres

Direct connection string:

postgresql://postgres.[PROJECT-REF]:[YOUR-PASSWORD]@db.[PROJECT-REF].supabase.com:5432/postgres
4. Get the database password

Reset the password in the Database Settings.

Replace [YOUR-PASSWORD] in the connection string with the database password.
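As a sketch of that substitution (the project ref and password below are hypothetical placeholders, not real credentials), the full session pooler URL can be assembled in a shell variable:

```shell
# Hypothetical values -- substitute your own project ref and database password
PROJECT_REF="abcdefghijklmnop"
DB_PASSWORD="my-secret-password"

# Session pooler connection string with the placeholders filled in
DB_URL="postgresql://postgres.${PROJECT_REF}:${DB_PASSWORD}@aws-0-us-east-1.pooler.supabase.com:5432/postgres"
echo "$DB_URL"
```

Keeping the URL in a variable also avoids pasting the password repeatedly into the dump commands below.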

5. Back up the database

Run these commands after replacing [CONNECTION_STRING] with your connection string from the previous steps:

supabase db dump --db-url [CONNECTION_STRING] -f roles.sql --role-only
supabase db dump --db-url [CONNECTION_STRING] -f schema.sql
supabase db dump --db-url [CONNECTION_STRING] -f data.sql --use-copy --data-only


Restore your backup using the CLI

1. Create a new project

2. Configure the newly created project

In the new project:

  • If Webhooks were used in the old database, enable Database Webhooks.
  • If any non-default extensions were used in the old database, enable them.
  • If Realtime replication was used in the old database, enable publication on the necessary tables.
3. Get the new database connection string

On the project page, click the Connect button at the top of the page to get the connection string.

Session pooler connection string:

postgresql://postgres.[PROJECT-REF]:[YOUR-PASSWORD]@aws-0-us-east-1.pooler.supabase.com:5432/postgres

Direct connection string:

postgresql://postgres.[PROJECT-REF]:[YOUR-PASSWORD]@db.[PROJECT-REF].supabase.com:5432/postgres
4. Get the database password

Reset the password on the project's Connect page.

Replace [YOUR-PASSWORD] in the connection string with the database password.

5. Restore your project with the CLI

Run these commands after replacing [CONNECTION_STRING] with your connection string from the previous steps:

psql \
  --single-transaction \
  --variable ON_ERROR_STOP=1 \
  --file roles.sql \
  --file schema.sql \
  --command 'SET session_replication_role = replica' \
  --file data.sql \
  --dbname [CONNECTION_STRING]

Important project restoration notes

Troubleshooting notes

  • Setting session_replication_role to replica disables all triggers during the restore so that columns are not double encrypted.
  • If you have created any custom roles with the login attribute, you have to set their passwords manually in the new project.
  • If you run into permission errors related to supabase_admin during the restore, edit the schema.sql file and comment out any lines containing ALTER ... OWNER TO "supabase_admin".
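One way to comment those lines out in bulk is a sed one-liner. This is a sketch: it assumes the ownership statements each sit on their own line starting with ALTER, and it writes a schema.sql.bak backup so you can review what changed before restoring.

```shell
# Comment out ownership changes that would require the supabase_admin role
# (a no-op if schema.sql is not in the current directory).
if [ -f schema.sql ]; then
  sed -i.bak 's/^\(ALTER .* OWNER TO "supabase_admin";\)/-- \1/' schema.sql
fi
```

Diff schema.sql against schema.sql.bak afterwards to confirm only the intended lines were commented out.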

Preserving migration history

If you were using the Supabase CLI to manage migrations on your old database and would like to preserve the migration history in your newly restored project, insert the migration records separately using the following commands:

supabase db dump --db-url "$OLD_DB_URL" -f history_schema.sql --schema supabase_migrations
supabase db dump --db-url "$OLD_DB_URL" -f history_data.sql --use-copy --data-only --schema supabase_migrations
psql \
  --single-transaction \
  --variable ON_ERROR_STOP=1 \
  --file history_schema.sql \
  --file history_data.sql \
  --dbname "$NEW_DB_URL"

Schema changes to auth and storage

If you have modified the auth and storage schemas in your old project, such as adding triggers or Row Level Security (RLS) policies, you have to restore them separately. The Supabase CLI can help you diff the changes to these schemas using the following commands.

supabase link --project-ref "$OLD_PROJECT_REF"
supabase db diff --linked --schema auth,storage > changes.sql

Migrate storage objects

The new project has the old project's Storage buckets, but the Storage objects must be migrated manually. Use the following script to move storage objects from one project to another.

// npm install @supabase/supabase-js@2
const { createClient } = require('@supabase/supabase-js')

const OLD_PROJECT_URL = 'https://xxx.supabase.co'
const OLD_PROJECT_SERVICE_KEY = 'old-project-service-key-xxx'

const NEW_PROJECT_URL = 'https://yyy.supabase.co'
const NEW_PROJECT_SERVICE_KEY = 'new-project-service-key-yyy'

const oldSupabase = createClient(OLD_PROJECT_URL, OLD_PROJECT_SERVICE_KEY)
const newSupabase = createClient(NEW_PROJECT_URL, NEW_PROJECT_SERVICE_KEY)

function createLoadingAnimation(message) {
  const readline = require('readline')
  const frames = ['ā ‹', 'ā ™', 'ā ¹', 'ā ø', 'ā ¼', 'ā “', 'ā ¦', 'ā §', 'ā ‡', 'ā ']
  let i = 0
  let timer
  let stopped = false

  const animate = () => {
    if (stopped) return
    process.stdout.write(`\r${frames[i]} ${message}`)
    i = (i + 1) % frames.length
    timer = setTimeout(animate, 80)
  }

  animate()

  return {
    stop: (finalMessage = '') => {
      stopped = true
      clearTimeout(timer)
      readline.clearLine(process.stdout, 0)
      readline.cursorTo(process.stdout, 0)
      process.stdout.write(`āœ“ ${finalMessage || message}\n`)
    },
  }
}

/**
 * Lists all files in a bucket, handling nested folders recursively.
 */
async function listAllFiles(bucket, path = '') {
  const loader = createLoadingAnimation(`Listing files in '${bucket}${path ? '/' + path : ''}'...`)

  try {
    const { data, error } = await oldSupabase.storage.from(bucket).list(path, { limit: 1000 })
    if (error) {
      loader.stop(`Error listing files in '${bucket}${path ? '/' + path : ''}'`)
      throw new Error(`āŒ Error listing files in bucket '${bucket}': ${error.message}`)
    }

    if (!data || data.length === 0) {
      loader.stop(`No files found in '${bucket}${path ? '/' + path : ''}'`)
      return []
    }

    let files = []
    for (const item of data) {
      if (!item.metadata) {
        // Entries without metadata are folders; recurse into them
        loader.stop(`Found folder '${item.name}' in '${bucket}${path ? '/' + path : ''}'`)
        const subFiles = await listAllFiles(bucket, `${path}${item.name}/`)
        files = files.concat(subFiles)
      } else {
        files.push({ fullPath: `${path}${item.name}`, metadata: item.metadata })
      }
    }

    loader.stop(`Found ${files.length} files in '${bucket}${path ? '/' + path : ''}'`)
    return files
  } catch (error) {
    loader.stop()
    throw error
  }
}

/**
 * Creates a bucket in the new Supabase project if it doesn't exist.
 */
async function ensureBucketExists(bucketName, options = {}) {
  const { data: existingBucket, error: getBucketError } =
    await newSupabase.storage.getBucket(bucketName)

  if (getBucketError && !getBucketError.message.includes('not found')) {
    throw new Error(`āŒ Error checking if bucket '${bucketName}' exists: ${getBucketError.message}`)
  }

  if (!existingBucket) {
    console.log(`🪣 Creating bucket '${bucketName}' in new project...`)
    const { error } = await newSupabase.storage.createBucket(bucketName, options)
    if (error) throw new Error(`āŒ Failed to create bucket '${bucketName}': ${error.message}`)
    console.log(`āœ… Created bucket '${bucketName}'`)
  } else {
    console.log(`ā„¹ļø Bucket '${bucketName}' already exists in new project`)
  }
}

/**
 * Migrates a single file from the old project to the new one.
 */
async function migrateFile(sourceBucketName, targetBucketName, file) {
  const loader = createLoadingAnimation(
    `Migrating ${file.fullPath} in bucket '${sourceBucketName}' to '${targetBucketName}'...`
  )

  try {
    const { data, error: downloadError } = await oldSupabase.storage
      .from(sourceBucketName)
      .download(file.fullPath)
    if (downloadError) {
      loader.stop(`Failed to migrate ${file.fullPath}: Download error`)
      throw new Error(`Download failed: ${downloadError.message}`)
    }

    // Preserve all available metadata from the original file
    const uploadOptions = {
      upsert: true,
      contentType: file.metadata?.mimetype,
      cacheControl: file.metadata?.cacheControl,
    }

    const { error: uploadError } = await newSupabase.storage
      .from(targetBucketName)
      .upload(file.fullPath, data, uploadOptions)
    if (uploadError) {
      loader.stop(`Failed to migrate ${file.fullPath}: Upload error`)
      throw new Error(`Upload failed: ${uploadError.message}`)
    }

    loader.stop(
      `Migrated ${file.fullPath} in bucket '${sourceBucketName}' to '${targetBucketName}'`
    )
    return { success: true, path: file.fullPath }
  } catch (err) {
    console.error(
      `āŒ Error migrating ${file.fullPath} in bucket '${targetBucketName}':`,
      err.message
    )
    return { success: false, path: file.fullPath, error: err.message }
  }
}

function chunkArray(array, size) {
  const chunks = []
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size))
  }
  return chunks
}

/**
 * Migrates all buckets and files from the old Supabase project to the new one.
 * Processes files in parallel within batches for efficiency.
 */
async function migrateBuckets() {
  console.log('šŸ”„ Starting Supabase Storage migration...')
  console.log(`šŸ“¦ Source project: ${OLD_PROJECT_URL}`)
  console.log(`šŸ“¦ Target project: ${NEW_PROJECT_URL}`)

  const readline = require('readline').createInterface({
    input: process.stdin,
    output: process.stdout,
  })

  console.log(
    '\nāš ļø WARNING: This migration may overwrite files in the target project if they have the same paths.'
  )
  console.log('āš ļø It is recommended to back up your target project before proceeding.')

  const answer = await new Promise((resolve) => {
    readline.question('Do you want to proceed with the migration? (yes/no): ', resolve)
  })

  readline.close()

  if (answer.toLowerCase() !== 'yes') {
    console.log('Migration canceled by user.')
    return { canceled: true }
  }

  console.log('\nšŸ“¦ Fetching all buckets from old project...')

  const { data: oldBuckets, error: bucketListError } = await oldSupabase.storage.listBuckets()

  if (bucketListError) throw new Error(`āŒ Error fetching buckets: ${bucketListError.message}`)
  console.log(`āœ… Found ${oldBuckets.length} buckets to migrate.`)

  const { data: existingBuckets, error: existingBucketsError } =
    await newSupabase.storage.listBuckets()
  if (existingBucketsError)
    throw new Error(`āŒ Error fetching existing buckets: ${existingBucketsError.message}`)

  const existingBucketNames = existingBuckets.map((b) => b.name)
  const conflictingBuckets = oldBuckets.filter((b) => existingBucketNames.includes(b.name))

  let conflictStrategy = 2

  if (conflictingBuckets.length > 0) {
    console.log('\nāš ļø The following buckets already exist in the target project:')
    conflictingBuckets.forEach((b) => console.log(` - ${b.name}`))

    const conflictAnswer = await new Promise((resolve) => {
      const rl = require('readline').createInterface({
        input: process.stdin,
        output: process.stdout,
      })
      rl.question(
        '\nHow do you want to handle existing buckets?\n' +
          '1. Skip existing buckets\n' +
          '2. Merge files (may overwrite existing files)\n' +
          '3. Rename buckets in target (add suffix "_migrated")\n' +
          '4. Cancel migration\n' +
          'Enter your choice (1-4): ',
        (answer) => {
          rl.close()
          resolve(answer)
        }
      )
    })

    if (conflictAnswer === '4') {
      console.log('Migration canceled by user.')
      return { canceled: true }
    }

    conflictStrategy = parseInt(conflictAnswer)
    if (isNaN(conflictStrategy) || conflictStrategy < 1 || conflictStrategy > 3) {
      console.log('Invalid choice. Migration canceled.')
      return { canceled: true }
    }
  }

  const migrationStats = {
    totalBuckets: oldBuckets.length,
    processedBuckets: 0,
    skippedBuckets: 0,
    totalFiles: 0,
    successfulFiles: 0,
    failedFiles: 0,
    failedFilesList: [],
  }

  for (const bucket of oldBuckets) {
    const bucketName = bucket.name
    console.log(`\nšŸ“ Processing bucket: ${bucketName}`)

    let targetBucketName = bucketName

    if (existingBucketNames.includes(bucketName)) {
      if (conflictStrategy === 1) {
        console.log(`ā© Skipping bucket '${bucketName}' as it already exists in target project`)
        migrationStats.skippedBuckets++
        continue
      } else if (conflictStrategy === 3) {
        targetBucketName = `${bucketName}_migrated`
        console.log(`šŸ”„ Renaming bucket to '${targetBucketName}' in target project`)
      } else {
        console.log(`šŸ”„ Merging files into existing bucket '${bucketName}' in target project`)
      }
    }

    // Preserve bucket configuration when creating in the new project
    if (targetBucketName !== bucketName || !existingBucketNames.includes(bucketName)) {
      await ensureBucketExists(targetBucketName, {
        public: bucket.public,
        fileSizeLimit: bucket.file_size_limit,
        allowedMimeTypes: bucket.allowed_mime_types,
      })
    }

    const files = await listAllFiles(bucketName)
    console.log(`āœ… Found ${files.length} files in bucket '${bucketName}'.`)
    migrationStats.totalFiles += files.length

    const batches = chunkArray(files, 10)

    for (let i = 0; i < batches.length; i++) {
      console.log(`\nšŸš€ Processing batch ${i + 1}/${batches.length} (${batches[i].length} files)`)

      const results = await Promise.all(
        batches[i].map((file) => migrateFile(bucketName, targetBucketName, file))
      )

      const batchSuccesses = results.filter((r) => r.success).length
      const batchFailures = results.filter((r) => !r.success)

      migrationStats.successfulFiles += batchSuccesses
      migrationStats.failedFiles += batchFailures.length
      migrationStats.failedFilesList.push(...batchFailures.map((f) => f.path))

      console.log(
        `āœ… Completed batch ${i + 1}/${batches.length}: ${batchSuccesses} succeeded, ${batchFailures.length} failed`
      )
    }

    migrationStats.processedBuckets++
    console.log(`āœ… Completed bucket '${bucketName}' migration`)
  }

  console.log('\nšŸ“Š Migration Summary:')
  console.log(
    `Buckets: ${migrationStats.processedBuckets}/${migrationStats.totalBuckets} processed, ${migrationStats.skippedBuckets} skipped`
  )
  console.log(
    `Files: ${migrationStats.successfulFiles} succeeded, ${migrationStats.failedFiles} failed (${migrationStats.totalFiles} total)`
  )

  if (migrationStats.failedFiles > 0) {
    console.log('\nāš ļø Failed files:')
    migrationStats.failedFilesList.forEach((path) => console.log(` - ${path}`))
    return migrationStats
  }

  return migrationStats
}

migrateBuckets()
  .then((stats) => {
    if (stats.canceled) {
      // Exit cleanly when the user canceled before any work was done
      process.exit(0)
    } else if (stats.failedFiles > 0) {
      console.log(`\nāš ļø Migration completed with ${stats.failedFiles} failed files.`)
      process.exit(1)
    } else {
      console.log('\nšŸŽ‰ Migration completed successfully!')
      process.exit(0)
    }
  })
  .catch((err) => {
    console.error('āŒ Fatal error during migration:', err.message)
    process.exit(1)
  })