Cookie Architecture
Design Principles
Privacy First
- All progress stored locally by default
- No account required to use
- No tracking or analytics without explicit consent
- User owns their data completely
- Zero-knowledge architecture for cloud sync

Resilience
- Progress survives browser clears through multiple storage strategies
- Export/import functionality for data portability
- Automatic backups at key milestones
- Recovery mechanisms for corrupted data

Simplicity
- Minimal data structure with clear semantics
- Efficient storage patterns
- Fast read/write operations
- Clear data lifecycle with version tracking
Storage Strategy
Primary: localStorage
| Attribute | Value |
|---|---|
| Key | `convergence_protocol_v1` |
| Format | JSON string of progress object |
| Limit | ~5MB (sufficient for our needs) |
| Scope | Same-origin only |
Pros:
- Simple, synchronous API
- Widely supported across browsers
- No setup required
- Fast read operations

Cons:
- Cleared when browser data is cleared
- Same-origin restrictions
- Synchronous writes can block UI
- No structured data support
Usage Pattern:
// Primary read/write for all progress data
const data = JSON.parse(localStorage.getItem('convergence_protocol_v1'));
localStorage.setItem('convergence_protocol_v1', JSON.stringify(data));
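Because localStorage can hold corrupted JSON (partial writes, manual edits) and `setItem` throws when the quota is exceeded, the raw calls above are usually wrapped in guards. A minimal sketch, with hypothetical helper names:

```javascript
const STORAGE_KEY = 'convergence_protocol_v1';

// Read progress, returning null instead of throwing on corrupted JSON.
function safeRead() {
  try {
    const raw = localStorage.getItem(STORAGE_KEY);
    return raw ? JSON.parse(raw) : null;
  } catch (e) {
    console.error('Corrupted progress data:', e);
    return null;
  }
}

// Write progress, reporting quota failures instead of throwing.
function safeWrite(data) {
  try {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(data));
    return true;
  } catch (e) {
    console.error('localStorage write failed (quota?):', e);
    return false;
  }
}
```

A `null` from `safeRead` or `false` from `safeWrite` then triggers the IndexedDB fallback described below.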
Secondary: IndexedDB (for larger data)
| Attribute | Value |
|---|---|
| Database | ConvergenceProtocol |
| Version | 1 |
| Stores | progress, reflections, metadata, backups |
Use Cases:
- Large reflection entries (>100KB)
- Export archives
- Automatic backup snapshots
- Historical data versioning
Schema:
// Object store: progress
// keyPath: 'id'
// { id: 'main', data: <progressObject>, timestamp: <ISO> }
// Object store: reflections
// keyPath: 'dayNumber'
// { dayNumber: 23, content: "...", encrypted: false }
// Object store: backups
// keyPath: 'id', autoIncrement: true
// { timestamp: <ISO>, data: <compressedProgress>, type: 'daily|manual|pre_migration' }
// Object store: metadata
// keyPath: 'key'
// { key: 'schemaVersion', value: '1.0.0' }
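The schema above maps to four `createObjectStore` calls at database-creation time. The code later in this document calls `openDB('ConvergenceProtocol', 1)`, which resembles the `idb` helper library; that is an assumption, so the sketch below isolates the schema-creation step, which is identical whether it runs in `idb`'s `upgrade` callback or a vanilla `onupgradeneeded` handler:

```javascript
// Creates the four object stores from the schema above.
// Intended for idb's upgrade callback or IndexedDB's onupgradeneeded
// (an assumption; the document does not show its setup code).
function upgradeSchema(db) {
  db.createObjectStore('progress', { keyPath: 'id' });
  db.createObjectStore('reflections', { keyPath: 'dayNumber' });
  db.createObjectStore('backups', { keyPath: 'id', autoIncrement: true });
  db.createObjectStore('metadata', { keyPath: 'key' });
}

// Usage with the idb library (hypothetical):
// const db = await openDB('ConvergenceProtocol', 1, { upgrade: upgradeSchema });
```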
Tertiary: Cookies (fallback/compatibility)
| Attribute | Value |
|---|---|
| Purpose | Essential session data only |
| Max Size | 4KB per cookie |
| Encryption | Required for any sensitive data |
Use Cases:
- Session continuity flag
- Cloud sync ID (if enabled)
- Preference hints for SSR
Security Requirements:
// Cookie attributes for security
document.cookie = `cp_session=${value}; ` +
  `Secure; ` +          // HTTPS only
  `SameSite=Strict; ` + // CSRF protection
  `Path=/; ` +
  `Max-Age=2592000`;    // 30 days
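The sync and recovery code later in this document calls a `getCookie` helper without defining it. One possible implementation, paired with a setter that applies the security attributes above (both names are this sketch's choice, not a documented API):

```javascript
// Read a cookie value by name, or null if absent.
function getCookie(name) {
  const match = document.cookie
    .split('; ')
    .find(pair => pair.startsWith(name + '='));
  return match ? decodeURIComponent(match.slice(name.length + 1)) : null;
}

// Write a cookie with the security attributes required above.
function setCookie(name, value, maxAgeSeconds = 2592000) {
  document.cookie =
    `${name}=${encodeURIComponent(value)}; ` +
    `Secure; SameSite=Strict; Path=/; Max-Age=${maxAgeSeconds}`;
}
```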
Data Schema
Progress Object Structure
{
  // Schema versioning for migrations
  version: "1.0.0",

  // Anonymous user identity
  user: {
    id: "uuid-generated-locally",        // Anonymous, never linked to PII
    createdAt: "2024-01-15T08:30:00Z",   // First visit timestamp
    lastActive: "2024-02-03T19:45:00Z"   // Most recent activity
  },

  // Journey-level tracking
  journey: {
    startDate: "2024-01-15T08:30:00Z",   // When they began the journey
    currentDay: 23,                      // 1-40, 0 if not started
    totalCompleted: 22,                  // Days marked complete
    status: "in_progress"                // not_started | in_progress | completed
  },

  // Individual day records (array of 40)
  days: [
    {
      dayNumber: 1,
      status: "completed",               // locked | unlocked | started | completed
      unlockedAt: "2024-01-15T08:30:00Z",
      startedAt: "2024-01-15T08:32:00Z",
      completedAt: "2024-01-15T08:37:00Z",
      timeSpentSeconds: 420,             // Total time in session
      reflection: "Today's meditation revealed...", // or null
      revisits: [                        // Array of return visits
        {
          visitedAt: "2024-01-16T09:00:00Z",
          timeSpentSeconds: 120
        }
      ]
    }
    // ... days 2-40 follow same structure
  ],

  // Streak tracking for motivation
  streaks: {
    current: 15,        // Current consecutive days
    longest: 15,        // All-time record
    history: [          // Historical streak records
      {
        startDate: "2024-01-15T08:30:00Z",
        endDate: "2024-01-29T20:00:00Z",
        length: 15
      }
    ]
  },

  // User preferences
  settings: {
    dayStartTime: "06:00",  // When new day unlocks (user's morning)
    notifications: false,   // Push notification permission
    soundEnabled: true,     // Audio cues during meditation
    cloudSync: false,       // Cloud backup enabled
    socialFeatures: false,  // Any sharing features
    theme: "dark"           // UI theme preference
  },

  // Computed statistics
  stats: {
    totalTimeSpentSeconds: 15420,  // Cumulative meditation time
    totalReflections: 18,          // Days with written reflections
    averageTimePerDay: 680,        // Average seconds per session
    completionRate: 0.95           // completed / started ratio
  }
}
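Before trusting a parsed blob against this schema (localStorage can return corrupted or stale data), a lightweight shape check is useful. This is a sketch with a hypothetical helper name; a schema library could do the same job more thoroughly:

```javascript
// Minimal structural validation of a parsed progress object.
// Checks only the fields the UI depends on; not exhaustive.
function isValidProgress(data) {
  return (
    data !== null &&
    typeof data === 'object' &&
    typeof data.version === 'string' &&
    !!data.user && typeof data.user.id === 'string' &&
    !!data.journey && Number.isInteger(data.journey.currentDay) &&
    Array.isArray(data.days) && data.days.length === 40
  );
}
```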
Day Status Lifecycle
locked ──▶ unlocked ──▶ started ──▶ completed
                           ▲             │
                           └── revisit ──┘
| Status | Description | Transitions |
|---|---|---|
| locked | Day not yet available | → unlocked when previous day completed OR dayStartTime reached |
| unlocked | Available but not started | → started when user opens day |
| started | Currently in progress | → completed when meditation finishes |
| completed | Finished, can revisit | → started on revisit (preserves completion) |
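The lifecycle can be enforced with a small transition guard so no code path skips a state (a sketch; the table and diagram above are the source of truth, and the names here are this example's own):

```javascript
// Legal day-status transitions from the lifecycle table.
const DAY_TRANSITIONS = {
  locked: ['unlocked'],
  unlocked: ['started'],
  started: ['completed'],
  completed: ['started'] // revisit; completion itself is preserved elsewhere
};

function canTransition(from, to) {
  return (DAY_TRANSITIONS[from] || []).includes(to);
}
```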
Cloud Sync Object (if enabled)
{
  syncId: "uuid-v4-for-this-user",        // Anonymous sync identifier
  lastSyncedAt: "2024-02-03T19:45:00Z",   // Server timestamp
  deviceId: "uuid-v4-for-this-device",    // Multi-device tracking
  encryptedData: "base64-encrypted-blob", // Client-side encrypted
  checksum: "sha256-hash",                // Integrity verification
  schemaVersion: "1.0.0"                  // For server-side validation
}
Export Object Structure
{
  exportVersion: "1.0.0",
  exportedAt: "2024-02-03T19:45:00Z",
  exportType: "full",              // full | encrypted
  data: { /* full progress object */ },
  checksum: "sha256-hash"          // Verify integrity on import
}
Implementation Details
Write Strategy
class ProgressStorage {
  constructor() {
    this.writeQueue = [];
    this.debounceTimer = null;
    this.DEBOUNCE_MS = 500;
  }

  async save(progressData) {
    // 1. Queue the update
    this.writeQueue.push(progressData);

    // 2. Debounce rapid changes
    clearTimeout(this.debounceTimer);
    this.debounceTimer = setTimeout(() => {
      this._flushWrites();
    }, this.DEBOUNCE_MS);
  }

  async _flushWrites() {
    // Take latest state from queue
    const latestData = this.writeQueue[this.writeQueue.length - 1];
    this.writeQueue = [];

    try {
      // 1. Always write to localStorage first (fast, synchronous)
      const serialized = JSON.stringify(latestData);
      localStorage.setItem('convergence_protocol_v1', serialized);

      // 2. Update IndexedDB backup (async, non-blocking)
      this._updateIndexedDB(latestData).catch(console.error);

      // 3. Queue for cloud sync if enabled
      if (latestData.settings.cloudSync) {
        this._queueCloudSync(latestData).catch(console.error);
      }
    } catch (error) {
      // Handle quota exceeded or other errors
      this._handleWriteError(error, latestData);
    }
  }

  async _updateIndexedDB(data) {
    const db = await openDB('ConvergenceProtocol', 1);
    await db.put('progress', {
      id: 'main',
      data: data,
      timestamp: new Date().toISOString()
    });
  }

  _handleWriteError(error, data) {
    if (error.name === 'QuotaExceededError') {
      // Try compression or cleanup
      this._compressAndRetry(data);
    } else {
      // Fall back to IndexedDB
      this._updateIndexedDB(data);
    }
  }
}
Read Strategy
async function loadProgress() {
  const DEFAULT_STATE = {
    version: "1.0.0",
    user: { id: generateUUID(), createdAt: now(), lastActive: now() },
    journey: { startDate: null, currentDay: 0, totalCompleted: 0, status: "not_started" },
    days: initializeDays(),
    streaks: { current: 0, longest: 0, history: [] },
    settings: { dayStartTime: "06:00", notifications: false, soundEnabled: true, cloudSync: false, socialFeatures: false, theme: "dark" },
    stats: { totalTimeSpentSeconds: 0, totalReflections: 0, averageTimePerDay: 0, completionRate: 0 }
  };

  // 1. Try localStorage first
  const localData = localStorage.getItem('convergence_protocol_v1');
  if (localData) {
    try {
      const parsed = JSON.parse(localData);
      return await migrateIfNeeded(parsed);
    } catch (e) {
      console.error('Failed to parse localStorage data:', e);
    }
  }

  // 2. If empty/missing, try IndexedDB
  try {
    const db = await openDB('ConvergenceProtocol', 1);
    const record = await db.get('progress', 'main');
    if (record && record.data) {
      // Restore to localStorage
      localStorage.setItem('convergence_protocol_v1', JSON.stringify(record.data));
      return record.data;
    }
  } catch (e) {
    console.error('Failed to read from IndexedDB:', e);
  }

  // 3. If cloud sync enabled and local empty, fetch from cloud
  const syncId = getCookie('cp_sync_id');
  if (syncId) {
    try {
      const cloudData = await fetchFromCloud(syncId);
      if (cloudData) {
        const decrypted = await decryptLocally(cloudData.encryptedData);
        localStorage.setItem('convergence_protocol_v1', JSON.stringify(decrypted));
        return decrypted;
      }
    } catch (e) {
      console.error('Failed to fetch from cloud:', e);
    }
  }

  // 4. Return default state if all empty
  return DEFAULT_STATE;
}
Migration Strategy
const MIGRATIONS = {
  '1.0.0': (data) => {
    // Current version, no migration needed
    return data;
  }
  // Future migrations added here
  // '1.1.0': (data) => { ... }
};

async function migrateIfNeeded(data) {
  const currentVersion = '1.0.0';
  const dataVersion = data.version || '0.0.0';

  if (dataVersion === currentVersion) {
    return data;
  }

  // Create backup before migration
  await createBackup(data, 'pre_migration');

  // Apply migrations in sequence
  let migratedData = { ...data };
  const versions = Object.keys(MIGRATIONS).sort();
  for (const version of versions) {
    if (compareVersions(dataVersion, version) < 0) {
      console.log(`Migrating from ${migratedData.version} to ${version}`);
      migratedData = MIGRATIONS[version](migratedData);
      migratedData.version = version;
    }
  }

  // Log migration event
  await logEvent('migration', {
    fromVersion: dataVersion,
    toVersion: currentVersion,
    timestamp: new Date().toISOString()
  });

  return migratedData;
}
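`migrateIfNeeded` relies on a `compareVersions` helper that is not defined in this document. A minimal sketch of the comparator it implies, returning negative/zero/positive like a standard sort comparator:

```javascript
// Compare dotted version strings numerically, segment by segment.
// Missing segments are treated as 0 (so '1.0' equals '1.0.0').
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return diff;
  }
  return 0;
}
```

Numeric comparison matters here: a plain string comparison would order '1.10.0' before '1.9.0'.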
Encryption (for sensitive data)
Local Encryption Architecture
                     ENCRYPTION FLOW

  User Passphrase (optional)
        │
        ▼
  ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
  │   PBKDF2    │────▶│   AES-GCM   │────▶│  Encrypted  │
  │ Key Derive  │     │   Encrypt   │     │    Data     │
  └─────────────┘     └─────────────┘     └─────────────┘

  Salt: stored in plaintext
  IV:   stored with encrypted data
  Key:  never stored, derived on demand
Implementation
class LocalEncryption {
  constructor() {
    this.ALGORITHM = 'AES-GCM';
    this.KEY_LENGTH = 256;
    this.ITERATIONS = 100000;
  }

  async deriveKey(passphrase, salt) {
    const encoder = new TextEncoder();
    const keyMaterial = await crypto.subtle.importKey(
      'raw',
      encoder.encode(passphrase),
      'PBKDF2',
      false,
      ['deriveKey']
    );
    return crypto.subtle.deriveKey(
      {
        name: 'PBKDF2',
        salt: salt,
        iterations: this.ITERATIONS,
        hash: 'SHA-256'
      },
      keyMaterial,
      { name: this.ALGORITHM, length: this.KEY_LENGTH },
      false,
      ['encrypt', 'decrypt']
    );
  }

  async encrypt(plaintext, passphrase) {
    const salt = crypto.getRandomValues(new Uint8Array(16));
    const iv = crypto.getRandomValues(new Uint8Array(12));
    const key = await this.deriveKey(passphrase, salt);
    const encoder = new TextEncoder();
    const ciphertext = await crypto.subtle.encrypt(
      { name: this.ALGORITHM, iv },
      key,
      encoder.encode(plaintext)
    );

    // Return: salt + iv + ciphertext
    const result = new Uint8Array(salt.length + iv.length + ciphertext.byteLength);
    result.set(salt, 0);
    result.set(iv, salt.length);
    result.set(new Uint8Array(ciphertext), salt.length + iv.length);
    return btoa(String.fromCharCode(...result));
  }

  async decrypt(encryptedBase64, passphrase) {
    const encrypted = Uint8Array.from(atob(encryptedBase64), c => c.charCodeAt(0));
    const salt = encrypted.slice(0, 16);
    const iv = encrypted.slice(16, 28);
    const ciphertext = encrypted.slice(28);
    const key = await this.deriveKey(passphrase, salt);
    const decrypted = await crypto.subtle.decrypt(
      { name: this.ALGORITHM, iv },
      key,
      ciphertext
    );
    return new TextDecoder().decode(decrypted);
  }
}
What to Encrypt
| Data Field | Default | Optional Encryption |
|---|---|---|
| Reflections | Plaintext | Yes (user can enable) |
| Personal notes | Plaintext | Yes (user can enable) |
| Progress metrics | Plaintext | No (needed for UI) |
| Settings | Plaintext | No |
| Streaks | Plaintext | No |
Cloud Encryption
class CloudEncryption {
  // Client-side encryption before upload.
  // Server NEVER sees plaintext.
  async prepareForSync(progressData, encryptionKey) {
    // 1. Serialize
    const json = JSON.stringify(progressData);
    // 2. Compress
    const compressed = await compress(json);
    // 3. Encrypt
    const encrypted = await this.encrypt(compressed, encryptionKey);
    // 4. Add integrity checksum
    const checksum = await this.computeChecksum(encrypted);

    return {
      encryptedData: btoa(encrypted),
      checksum,
      schemaVersion: progressData.version
    };
  }

  async processSyncResponse(syncData, encryptionKey) {
    // 1. Verify integrity
    const computedChecksum = await this.computeChecksum(syncData.encryptedData);
    if (computedChecksum !== syncData.checksum) {
      throw new Error('Data integrity check failed');
    }
    // 2. Decrypt
    const decrypted = await this.decrypt(syncData.encryptedData, encryptionKey);
    // 3. Decompress
    const decompressed = await decompress(decrypted);
    // 4. Parse
    return JSON.parse(decompressed);
  }
}
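`CloudEncryption` (and several classes below) call `compress`/`decompress` without defining them. One possible implementation uses the web Compression Streams API, available in modern browsers and recent Node; the gzip choice and function names are assumptions of this sketch:

```javascript
// gzip a string to bytes using the Compression Streams API.
async function compress(text) {
  const stream = new Blob([text]).stream()
    .pipeThrough(new CompressionStream('gzip'));
  return new Uint8Array(await new Response(stream).arrayBuffer());
}

// Inverse: gunzip bytes back to the original string.
async function decompress(bytes) {
  const stream = new Blob([bytes]).stream()
    .pipeThrough(new DecompressionStream('gzip'));
  return new Response(stream).text();
}
```

JSON progress data is highly repetitive (field names, ISO timestamps), which is why the size-budget section below estimates roughly 3x reduction from compression.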
Sync Architecture
Cloud Sync Flow
Client Device                          Server (Zero Knowledge)
      │                                         │
      │ 1. Enable sync: generate syncId locally │
      │ 2. Encrypt progress locally             │
      │    (client-side)                        │
      │                                         │
      │ 3. Upload {syncId, encrypted} ─────────▶│
      │                                         │ 4. Store encrypted blob
      │◀───────────────────── 5. Confirmation ──│
      │                                         │
      │ 6. Other devices fetch by syncId ──────▶│
      │ 7. Decrypt locally (client-side)        │
      │                                         │
Conflict Resolution
class ConflictResolver {
  resolve(localData, serverData) {
    const localTime = new Date(localData.user.lastActive).getTime();
    const serverTime = new Date(serverData.user.lastActive).getTime();

    // Strategy 1: Last-write-wins for most data
    const baseData = localTime > serverTime ? localData : serverData;

    // Strategy 2: Merge for streaks (take maximum)
    baseData.streaks.current = Math.max(
      localData.streaks.current,
      serverData.streaks.current
    );
    baseData.streaks.longest = Math.max(
      localData.streaks.longest,
      serverData.streaks.longest
    );

    // Strategy 3: Merge day completion (OR operation)
    baseData.days = baseData.days.map((day, index) => {
      const localDay = localData.days[index];
      const serverDay = serverData.days[index];
      return {
        ...day,
        status: this.mergeDayStatus(localDay.status, serverDay.status),
        timeSpentSeconds: Math.max(
          localDay.timeSpentSeconds || 0,
          serverDay.timeSpentSeconds || 0
        ),
        // Reflection: prefer local if both exist, otherwise take whichever exists
        reflection: localDay.reflection || serverDay.reflection || null
      };
    });

    // Strategy 4: Prompt user for reflection conflicts
    const reflectionConflicts = this.findReflectionConflicts(localData, serverData);
    if (reflectionConflicts.length > 0) {
      this.queueConflictPrompt(reflectionConflicts);
    }

    return baseData;
  }

  mergeDayStatus(localStatus, serverStatus) {
    // Priority: completed > started > unlocked > locked
    const priority = { locked: 0, unlocked: 1, started: 2, completed: 3 };
    return priority[localStatus] >= priority[serverStatus] ? localStatus : serverStatus;
  }
}
Backup & Recovery
Automatic Backups
| Trigger | Destination | Retention |
|---|---|---|
| Daily (first visit) | IndexedDB | Last 7 days |
| Pre-migration | IndexedDB | Last 10 migrations |
| Day completion | IndexedDB | All completions |
| Weekly | Export prompt | User decides |
class BackupManager {
  async createBackup(data, type = 'manual') {
    const backup = {
      timestamp: new Date().toISOString(),
      type, // 'daily' | 'manual' | 'pre_migration' | 'completion'
      data: await compress(JSON.stringify(data)),
      checksum: await computeChecksum(data)
    };

    const db = await openDB('ConvergenceProtocol', 1);
    // The backups store is auto-incrementing, so IndexedDB assigns
    // the id and returns it from add()
    const id = await db.add('backups', backup);

    // Cleanup old backups
    await this.cleanupOldBackups(type);
    return id;
  }

  async cleanupOldBackups(type) {
    const db = await openDB('ConvergenceProtocol', 1);
    const allBackups = await db.getAll('backups');
    const limits = {
      daily: 7,
      pre_migration: 10,
      completion: 40, // Keep all day completions
      manual: Infinity
    };

    const typeBackups = allBackups.filter(b => b.type === type);
    if (typeBackups.length > limits[type]) {
      // Sort newest first, then delete everything beyond the limit
      const toDelete = typeBackups
        .sort((a, b) => new Date(b.timestamp) - new Date(a.timestamp))
        .slice(limits[type]);
      for (const backup of toDelete) {
        await db.delete('backups', backup.id);
      }
    }
  }

  async restoreFromBackup(backupId) {
    const db = await openDB('ConvergenceProtocol', 1);
    const backup = await db.get('backups', backupId);
    if (!backup) {
      throw new Error('Backup not found');
    }

    const decompressed = await decompress(backup.data);
    const data = JSON.parse(decompressed);

    // Verify integrity
    const checksum = await computeChecksum(data);
    if (checksum !== backup.checksum) {
      throw new Error('Backup integrity check failed');
    }

    // Restore to localStorage
    localStorage.setItem('convergence_protocol_v1', JSON.stringify(data));
    return data;
  }
}
Manual Export
class ExportManager {
  // JSON download
  async exportToJSON(data, encrypted = false) {
    const exportObj = {
      exportVersion: '1.0.0',
      exportedAt: new Date().toISOString(),
      exportType: encrypted ? 'encrypted' : 'full',
      data: encrypted ? await encryptForExport(data) : data,
      checksum: await computeChecksum(data)
    };

    const blob = new Blob([JSON.stringify(exportObj, null, 2)], {
      type: 'application/json'
    });
    const url = URL.createObjectURL(blob);
    const a = document.createElement('a');
    a.href = url;
    a.download = `convergence-protocol-backup-${formatDate(new Date())}.json`;
    a.click();
    URL.revokeObjectURL(url);
  }

  // QR code for mobile transfer
  async exportToQR(data) {
    // Compress and chunk for QR size limits
    const compressed = await compress(JSON.stringify(data));
    const chunks = this.chunkData(compressed, 2000); // QR code limit

    if (chunks.length > 1) {
      // Multi-part QR code
      return this.generateMultiPartQR(chunks);
    }
    return this.generateQR(chunks[0]);
  }

  chunkData(data, maxSize) {
    const chunks = [];
    for (let i = 0; i < data.length; i += maxSize) {
      chunks.push(data.slice(i, i + maxSize));
    }
    return chunks;
  }
}
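The import side of this flow (listed as `importProgress` in the API reference below, but not shown) would parse the export file and verify its checksum before accepting it. A sketch, assuming the same `computeChecksum` helper the rest of this document uses:

```javascript
// Parse an exported backup file and verify integrity before returning
// the progress object. The encrypted branch is deliberately left as an
// error here; a real flow would prompt for the passphrase first.
async function importProgress(file) {
  const text = await file.text();
  const exportObj = JSON.parse(text);

  if (exportObj.exportType === 'encrypted') {
    throw new Error('Encrypted import requires a passphrase flow');
  }

  // Verify integrity against the checksum recorded at export time
  const checksum = await computeChecksum(exportObj.data);
  if (checksum !== exportObj.checksum) {
    throw new Error('Import integrity check failed');
  }
  return exportObj.data;
}
```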
Recovery Scenarios
| Scenario | Detection | Recovery Action |
|---|---|---|
| Browser data cleared | localStorage empty, IndexedDB has backup | Restore from IndexedDB backup |
| Device lost | New device, syncId in cookie | Fetch from cloud, decrypt locally |
| Corrupted data | Checksum mismatch | Restore from most recent valid backup |
| Starting fresh | User explicitly resets | Import from export file |
| Version mismatch | Schema version differs | Run migration; restore from pre-migration backup if it fails |
async function attemptRecovery() {
  // 1. Check for corruption
  const localData = localStorage.getItem('convergence_protocol_v1');
  if (localData) {
    try {
      const parsed = JSON.parse(localData);
      if (await verifyIntegrity(parsed)) {
        return parsed; // Data is valid
      }
    } catch (e) {
      console.error('Local data corrupted:', e);
    }
  }

  // 2. Try IndexedDB backups, newest first
  const db = await openDB('ConvergenceProtocol', 1);
  const backups = await db.getAll('backups');
  for (const backup of backups.sort((a, b) =>
    new Date(b.timestamp) - new Date(a.timestamp)
  )) {
    try {
      const restored = await restoreFromBackup(backup.id);
      console.log(`Recovered from backup: ${backup.timestamp}`);
      return restored;
    } catch (e) {
      console.error('Backup restore failed:', e);
    }
  }

  // 3. Try cloud sync
  const syncId = getCookie('cp_sync_id');
  if (syncId) {
    try {
      const cloudData = await fetchFromCloud(syncId);
      if (cloudData) {
        return await decryptLocally(cloudData.encryptedData);
      }
    } catch (e) {
      console.error('Cloud recovery failed:', e);
    }
  }

  // 4. Return default state (as defined in loadProgress)
  console.warn('All recovery methods failed, starting fresh');
  return DEFAULT_STATE;
}
Privacy Considerations
What We Store
| Category | Data | Purpose |
|---|---|---|
| Progress | Day completion, time spent | Core functionality |
| Journey | Start date, current day | Progress tracking |
| Reflections | User-written content | Personal growth record |
| Settings | Preferences | User experience |
| Stats | Aggregated metrics | Motivation & insights |
What We Donβt Store
| Category | Why Not | Alternative |
|---|---|---|
| Email address | Not required for functionality | Optional cloud sync uses anonymous ID |
| Name | Not required | Display name optional, stored locally only |
| IP address | Privacy risk | No server logging |
| Device fingerprint | Tracking risk | No analytics |
| Third-party cookies | Privacy violation | First-party only |
User Controls
class PrivacyControls {
  // View all stored data
  async getAllStoredData() {
    return {
      localStorage: this.getLocalStorageData(),
      indexedDB: await this.getIndexedDBData(),
      cookies: this.getCookieData()
    };
  }

  // Export all data (GDPR data portability)
  async exportAllData() {
    const allData = await this.getAllStoredData();
    return {
      exportVersion: '1.0.0',
      exportedAt: new Date().toISOString(),
      data: allData
    };
  }

  // Delete all data (GDPR right to erasure)
  async deleteAllData() {
    // Capture the sync ID before clearing cookies, so cloud deletion
    // still has the identifier it needs
    const syncId = getCookie('cp_sync_id');

    // 1. Clear localStorage
    localStorage.removeItem('convergence_protocol_v1');

    // 2. Clear IndexedDB
    const db = await openDB('ConvergenceProtocol', 1);
    await db.clear('progress');
    await db.clear('reflections');
    await db.clear('backups');
    await db.clear('metadata');

    // 3. Clear cookies
    document.cookie = 'cp_session=; Max-Age=0; Path=/';
    document.cookie = 'cp_sync_id=; Max-Age=0; Path=/';

    // 4. Delete from cloud if sync was enabled
    if (syncId) {
      await deleteFromCloud(syncId);
    }

    // 5. Reset to default state
    return DEFAULT_STATE;
  }

  // Opt out of everything
  async optOutOfEverything() {
    const data = await loadProgress();
    data.settings.cloudSync = false;
    data.settings.notifications = false;
    data.settings.socialFeatures = false;
    await saveProgress(data);

    // Delete cloud data if it exists
    await this.deleteCloudData();
  }
}
Storage Budget
| Storage Type | Typical Size | Maximum | Notes |
|---|---|---|---|
| localStorage | ~50KB | 5MB | Primary storage |
| IndexedDB | ~200KB | 50MB+ | Backups & large data |
| Cookies | <1KB | 4KB | Session only |
| Total | ~250KB | | Well under limits |
Size Breakdown
Progress Object (typical):
├── Base structure: ~2KB
├── 40 day records: ~20KB (500 bytes each)
├── Reflections: ~20KB (average 500 chars)
├── Streak history: ~2KB
└── Settings & stats: ~1KB
Total: ~45KB
With compression: ~15-20KB
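These figures can be checked against real data at runtime by measuring the serialized size of the progress object (the helper name is this sketch's own; `Blob.size` reports byte length, so multi-byte UTF-8 characters in reflections are counted correctly, unlike `string.length`):

```javascript
// Byte size of the progress object as it would be stored.
function storageFootprintBytes(progressData) {
  return new Blob([JSON.stringify(progressData)]).size;
}
```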
Optimization Strategies
class StorageOptimizer {
  constructor() {
    this.dayCache = new Map();
    this.statsCache = null;
    this.saveTimeout = null;
  }

  // Compress large reflections
  async compressReflections(days) {
    for (const day of days) {
      if (day.reflection && day.reflection.length > 500) {
        day.reflection = await compress(day.reflection);
        day.reflectionCompressed = true;
      }
    }
    return days;
  }

  // Lazy load day details
  async getDayDetails(dayNumber) {
    // Check if full details are in memory
    if (this.dayCache.has(dayNumber)) {
      return this.dayCache.get(dayNumber);
    }
    // Load from IndexedDB
    const db = await openDB('ConvergenceProtocol', 1);
    const details = await db.get('dayDetails', dayNumber);
    this.dayCache.set(dayNumber, details);
    return details;
  }

  // Cache computed metrics
  getCachedStats(progressData) {
    const cacheKey = this.computeCacheKey(progressData);
    if (this.statsCache?.key === cacheKey) {
      return this.statsCache.value;
    }
    const stats = this.computeStats(progressData);
    this.statsCache = { key: cacheKey, value: stats };
    return stats;
  }

  // Debounce all writes
  debouncedSave(data) {
    clearTimeout(this.saveTimeout);
    this.saveTimeout = setTimeout(() => {
      this.save(data);
    }, 500);
  }
}
Performance Targets

| Operation | Target | Worst Case |
|---|---|---|
| Load progress | <50ms | <200ms (with recovery) |
| Save progress | <10ms | <100ms (with sync) |
| Export data | <100ms | <500ms (with encryption) |
| Import data | <100ms | <500ms (with migration) |
| Day unlock check | <1ms | <5ms |
Security Checklist
API Reference
Core Functions
// Load progress from any available source
async function loadProgress(): Promise<ProgressObject>

// Save progress to all configured storage
async function saveProgress(data: ProgressObject): Promise<void>

// Export progress to file
async function exportProgress(options?: ExportOptions): Promise<Blob>

// Import progress from file
async function importProgress(file: File): Promise<ProgressObject>

// Delete all stored data
async function deleteAllData(): Promise<void>

// Enable cloud sync
async function enableCloudSync(): Promise<SyncConfig>

// Disable cloud sync and delete cloud data
async function disableCloudSync(): Promise<void>
Events
// Storage events for cross-tab synchronization
window.addEventListener('storage', (e) => {
  if (e.key === 'convergence_protocol_v1') {
    // Another tab updated progress
    emit('progress:externalUpdate', JSON.parse(e.newValue));
  }
});

// Custom events
emit('progress:saved', { timestamp, size });
emit('progress:loaded', { source, version });
emit('progress:synced', { direction, timestamp });
emit('progress:backupCreated', { id, type, timestamp });
emit('progress:error', { type, error, recoverable });
Version History
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2024-01-15 | Initial release |
This architecture prioritizes user privacy and data ownership while providing a resilient, performant storage solution for The Convergence Protocol meditation journey.