Storing Vectors
Insert and update vector embeddings with metadata using the JavaScript SDK or Postgres.
This feature is in alpha
Expect rapid changes, limited features, and possible breaking updates. Share feedback as we refine the experience and expand access.
Once you've created a bucket and index, you can start storing vectors. Vectors can include optional metadata for filtering and enrichment during queries.
Basic vector insertion
```javascript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient('https://your-project.supabase.co', 'your-service-key')

// Get bucket and index
const bucket = supabase.storage.vectors.from('embeddings')
const index = bucket.index('documents-openai')

// Insert vectors
const { error } = await index.putVectors({
  vectors: [
    {
      key: 'doc-1',
      data: {
        float32: [0.1, 0.2, 0.3 /* ... rest of embedding ... */],
      },
      metadata: {
        title: 'Getting Started with Vector Buckets',
        category: 'documentation',
      },
    },
    {
      key: 'doc-2',
      data: {
        float32: [0.4, 0.5, 0.6 /* ... rest of embedding ... */],
      },
      metadata: {
        title: 'Advanced Vector Search',
        category: 'blog',
      },
    },
  ],
})

if (error) {
  console.error('Error storing vectors:', error)
} else {
  console.log('✓ Vectors stored successfully')
}
```
Storing vectors from Embeddings API
Generate embeddings using an LLM API and store them directly:
```javascript
import { createClient } from '@supabase/supabase-js'
import OpenAI from 'openai'

const supabase = createClient('https://your-project.supabase.co', 'your-service-key')

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
})

// Documents to embed and store
const documents = [
  { id: '1', title: 'How to Train Your AI', content: 'Guide for training models...' },
  { id: '2', title: 'Vector Search Best Practices', content: 'Tips for semantic search...' },
  {
    id: '3',
    title: 'Building RAG Systems',
    content: 'Implementing retrieval-augmented generation...',
  },
]

// Generate embeddings
const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: documents.map((doc) => doc.content),
})

// Prepare vectors for storage
const vectors = documents.map((doc, i) => ({
  key: doc.id,
  data: {
    float32: response.data[i].embedding,
  },
  metadata: {
    title: doc.title,
    source: 'knowledge_base',
    created_at: new Date().toISOString(),
  },
}))

// Store vectors in batches (max 500 per request)
const bucket = supabase.storage.vectors.from('embeddings')
const index = bucket.index('documents-openai')

for (let i = 0; i < vectors.length; i += 500) {
  const batch = vectors.slice(i, i + 500)
  const { error } = await index.putVectors({ vectors: batch })

  if (error) {
    console.error(`Error storing batch ${i / 500 + 1}:`, error)
  } else {
    console.log(`✓ Stored batch ${i / 500 + 1} (${batch.length} vectors)`)
  }
}
```
Updating vectors
```javascript
const index = bucket.index('documents-openai')

// Update a vector (same key)
const { error } = await index.putVectors({
  vectors: [
    {
      key: 'doc-1',
      data: {
        float32: [0.15, 0.25, 0.35 /* ... updated embedding ... */],
      },
      metadata: {
        title: 'Getting Started with Vector Buckets - Updated',
        updated_at: new Date().toISOString(),
      },
    },
  ],
})

if (!error) {
  console.log('✓ Vector updated successfully')
}
```
Deleting vectors
```javascript
const index = bucket.index('documents-openai')

// Delete specific vectors
const { error } = await index.deleteVectors({
  keys: ['doc-1', 'doc-2'],
})

if (!error) {
  console.log('✓ Vectors deleted successfully')
}
```
Metadata best practices
Metadata makes vectors more useful by enabling filtering and context:
```javascript
const vectors = [
  {
    key: 'product-001',
    data: { float32: [...] },
    metadata: {
      product_id: 'prod-001',
      category: 'electronics',
      price: 299.99,
      in_stock: true,
      tags: ['laptop', 'portable'],
      description: 'High-performance ultrabook',
    },
  },
  {
    key: 'product-002',
    data: { float32: [...] },
    metadata: {
      product_id: 'prod-002',
      category: 'electronics',
      price: 99.99,
      in_stock: true,
      tags: ['headphones', 'wireless'],
      description: 'Noise-cancelling wireless headphones',
    },
  },
]

const { error } = await index.putVectors({ vectors })
```
Metadata field guidelines
- Keep it lightweight: metadata is returned with query results, so large values increase response size
- Use consistent types: store each field with the same data type across all vectors
- Index key fields: mark fields you'll filter by to improve query performance
- Avoid nested objects: while nesting is supported, flat structures are easier to filter
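The guidelines above can be enforced before insertion with a small pre-flight check. This is a sketch, not part of the SDK: `checkMetadata` and the `MAX_METADATA_BYTES` budget are hypothetical names, and the byte limit is an illustrative value you should tune for your own queries.

```javascript
// Hypothetical helper: sanity-check metadata against the guidelines above.
const MAX_METADATA_BYTES = 2048 // illustrative budget, not an SDK limit

function checkMetadata(metadata) {
  const issues = []

  // Keep it lightweight: measure the serialized size
  const bytes = Buffer.byteLength(JSON.stringify(metadata), 'utf-8')
  if (bytes > MAX_METADATA_BYTES) {
    issues.push(`metadata is ${bytes} bytes (budget: ${MAX_METADATA_BYTES})`)
  }

  // Avoid nested objects: flag anything that isn't a flat value or array
  for (const [key, value] of Object.entries(metadata)) {
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      issues.push(`field "${key}" is a nested object`)
    }
  }

  return issues
}
```

Running such a check over a batch before calling `putVectors` catches oversized or nested metadata early, when it is still cheap to fix.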
Batch processing large datasets
For storing large numbers of vectors efficiently:
```javascript
import { createClient } from '@supabase/supabase-js'
import fs from 'fs'

const supabase = createClient(...)
const index = supabase.storage.vectors
  .from('embeddings')
  .index('documents-openai')

// Read embeddings from file
const embeddingsFile = fs.readFileSync('embeddings.jsonl', 'utf-8')
const lines = embeddingsFile.split('\n').filter((line) => line.trim())

const vectors = lines.map((line) => {
  const { key, embedding, metadata } = JSON.parse(line)
  return {
    key,
    data: { float32: embedding },
    metadata,
  }
})

// Process in batches
const BATCH_SIZE = 500
let processed = 0

for (let i = 0; i < vectors.length; i += BATCH_SIZE) {
  const batch = vectors.slice(i, i + BATCH_SIZE)

  try {
    const { error } = await index.putVectors({ vectors: batch })

    if (error) throw error

    processed += batch.length
    console.log(`Progress: ${processed}/${vectors.length}`)
  } catch (error) {
    console.error(`Batch failed at offset ${i}:`, error)
    // Optionally implement retry logic
  }
}

console.log('✓ All vectors stored successfully')
```
Performance optimization
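The batch loop leaves retry logic as an exercise; one way to harden it is a small exponential-backoff wrapper. This is a sketch under the stated assumptions: `withRetry` is a hypothetical helper, not part of the SDK, and the delay values are illustrative.

```javascript
// Hypothetical retry helper with exponential backoff.
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt === attempts) throw err
      // Back off: 500 ms, 1000 ms, 2000 ms, ...
      const delay = baseDelayMs * 2 ** (attempt - 1)
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
}
```

Each failing batch would then be wrapped as `await withRetry(() => index.putVectors({ vectors: batch }))`, so transient failures are retried before the batch is reported as failed.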
Batch operations
Always use batch operations for better performance:
```javascript
// ❌ Inefficient - multiple requests
for (const vector of vectors) {
  await index.putVectors({ vectors: [vector] })
}

// ✅ Efficient - single batch operation
await index.putVectors({ vectors })
```
Metadata considerations
Keep metadata concise:
```javascript
// ❌ Large metadata
metadata: {
  full_document_text: 'Very long document content...',
  detailed_analysis: { /* large object */ }
}

// ✅ Lean metadata
metadata: {
  doc_id: 'doc-123',
  category: 'news',
  summary: 'Brief summary'
}
```
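The difference is easy to quantify: since metadata travels with every query result, the serialized size is the cost you pay per hit. A quick way to compare the two shapes above (the sample values are illustrative):

```javascript
// Illustrative comparison of serialized metadata sizes.
const large = {
  full_document_text: 'Very long document content... '.repeat(100),
  detailed_analysis: { sentiment: 0.9, entities: ['acme', 'widget'] },
}

const lean = {
  doc_id: 'doc-123',
  category: 'news',
  summary: 'Brief summary',
}

// Measure what actually goes over the wire per result
const size = (obj) => Buffer.byteLength(JSON.stringify(obj), 'utf-8')
console.log(`large: ${size(large)} bytes, lean: ${size(lean)} bytes`)
```

Storing a `doc_id` and fetching the full document from a regular table when needed keeps query responses small without losing access to the full content.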