Refer to the Storage guide to learn how access control works.
Resumable uploads use a disk cache by default to store the upload URLs. You can customize this in the Storage plugin config by changing the resumable.cache property.
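For example, the cache could be swapped out when installing the Storage plugin. The snippet below is only a minimal sketch: the resumable { } builder and the MemoryResumableCache name are assumptions based on the description above, so check the Storage module reference for the exact cache implementations available.
val supabase = createSupabaseClient(
    supabaseUrl = "https://<project-id>.supabase.co",
    supabaseKey = "<anon-key>"
) {
    install(Storage) {
        resumable {
            //assumption: replace the default disk cache with an in-memory implementation
            cache = MemoryResumableCache()
        }
    }
}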
Examples
Upload file
val bucket = supabase.storage["avatars"]
bucket.upload("myIcon.png", byteArray, upsert = false)
//on JVM you can use java.io.File
bucket.upload("myIcon.png", file, upsert = false)
Upload file with progress
val bucket = supabase.storage["avatars"]
bucket.uploadAsFlow("test.png", byteArrayOf()).collect {
    when(it) {
        is UploadStatus.Progress -> println("Progress: ${it.totalBytesSend.toFloat() / it.contentLength * 100}%")
        is UploadStatus.Success -> println("Success")
    }
}
Create resumable upload
val bucket = supabase.storage["avatars"]
//JVM/Android: create or continue an upload directly from a java.io.File
val upload = bucket.resumable.createOrContinueUpload("icon.png", File("icon.png"))
//Other platforms: provide the data directly...
val upload = bucket.resumable.createOrContinueUpload(data = byteArray, source = "this is for continuing previous uploads later", path = "icon.png")
//...or provide a channel factory (probably better to wrap this in an extension function)
val upload = bucket.resumable.createOrContinueUpload(
    channel = { offset -> /* create a ByteReadChannel and seek to the given offset */ },
    source = "this is for continuing previous uploads later",
    size = dataSize,
    path = "icon.png"
)
upload.startOrResumeUploading() //start or resume the actual upload
Continue previous uploads
val bucket = supabase.storage["avatars"]
//only on JVM/Android:
bucket.resumable.continuePreviousFileUploads()
    .map { it.await() } //await all uploads. This just makes sure each upload has an up-to-date url. You can also do this in parallel
    .forEach { upload ->
        upload.startOrResumeUploading()
    }
//on other platforms you may have to continue the uploads from their source (probably better to wrap this in an extension function):
bucket.resumable.continuePreviousUploads { source, offset ->
    //create a ByteReadChannel from the source and seek to the given offset
}
    .map { it.await() } //await all uploads. This just makes sure each upload has an up-to-date url. You can also do this in parallel
    .forEach { upload ->
        upload.startOrResumeUploading()
    }
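As the comments above hint, the channel-based variant is easiest to use behind a small extension function. The sketch below is a hypothetical example, not part of the library: it assumes bucket.resumable is of type ResumableClient and that each source string can be resolved back to its full data in memory; a real implementation would instead open a platform-specific file or stream at the given offset.
//hypothetical helper: continue previous uploads when every source string can be
//resolved back to its full data (e.g. from an app-level cache keyed by source)
suspend fun ResumableClient.continuePreviousUploadsFromMemory(
    resolve: (source: String) -> ByteArray
) = continuePreviousUploads { source, offset ->
    val data = resolve(source)
    //skip the bytes that were already uploaded and wrap the rest in a ByteReadChannel
    ByteReadChannel(data.copyOfRange(offset.toInt(), data.size))
}
With such a helper, the non-JVM branch above collapses to bucket.resumable.continuePreviousUploadsFromMemory { source -> /* look up the data for this source */ }.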