Memory-Continuous Architecture · JVM · Python

Where functions share
memory, not messages.

Deploy like microservices. Perform like a monolith. Evolve like a living organism. A Kubernetes-native runtime for zero-copy function composition.

curl -sf https://kubefn.com/install.sh | sh
4-18x faster for a 7-function checkout pipeline (full HTTP cycle, measured with hey)

Microservices: 14-70ms · 7 HTTP hops + JSON ser/deser at each
KubeFn JVM:     3.8ms · 1 HTTP call, 7 in-memory steps

Python:  6-30x faster · ML pipeline · 1.0ms avg · 7,455 rps
Node.js: 20-100x faster · API gateway · 0.3ms avg · 33K rps

Full HTTP request-response cycle. Measured with hey, 1,000 requests, 10 concurrent.

See it in action

Deploy 4 functions. Watch them share memory. Process a request in microseconds.

KubeFn Organism (Shared JVM) · HeapExchange: auth · price · fraud · quote

🔒 AuthFunction      /auth      0.08ms
💰 PricingFunction   /pricing   0.12ms
🛡 FraudFunction     /fraud     0.15ms
📦 QuoteAssembler    /checkout  0.03ms

4 functions · 0.38ms total · 0 serialization · 0 HTTP calls

The architecture trilemma — solved.

Every platform before KubeFn forces a trade-off. KubeFn gives you all three.

                      Shared memory   Independent deploy   Hot-swap
Monolith                   ✓                  ✗                ✗
Microservices              ✗                  ✓                ✗
FaaS / Serverless          ✗                  ✓                ✗
KubeFn                     ✓                  ✓                ✓

The Revolutionary Primitives

🧬

HeapExchange

Zero-copy shared object graph between functions. No serialization. No network. Same memory address. Function A's output IS Function B's input.
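The idea in miniature (plain Python, no KubeFn APIs; the `heap` dict and function names are illustrative): two functions share one in-process object graph, so the consumer reads the exact object the producer published, while an HTTP hop would hand it a serialized copy.

```python
import json

# A shared in-process "heap": one dict that both functions can see.
heap = {}

def pricing_fn(qty):
    result = {"total": qty * 9.99}
    heap["price"] = result      # publish: stores a reference, not a copy
    return result

def quote_fn():
    return heap["price"]        # read: the SAME object, zero copy

published = pricing_fn(3)
read_back = quote_fn()

# Same memory address: Function A's output IS Function B's input.
assert read_back is published

# Contrast: an HTTP hop serializes and deserializes, yielding a new object.
copied = json.loads(json.dumps(published))
assert copied == published and copied is not published
```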

Born-Warm Deploys

New function revisions enter an already-hot JVM. Shared libraries are JIT-compiled. Peak performance in <1 second, not 30+ seconds.

🔄

Hot-Swap

Replace individual functions while traffic flows. Zero dropped requests. The organism lives — only the organ is replaced. Tested: 200/200 successful.
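The mechanism can be sketched in a few lines of plain Python (registry shape and function names are illustrative, not the KubeFn API): callers resolve the function through a registry on every request, so swapping the registered reference upgrades in-flight traffic without a restart.

```python
# Minimal hot-swap sketch: a registry maps a route to the current
# function version; deploying again replaces the reference while
# callers keep resolving through the registry.
registry = {}

def deploy(name, fn):
    registry[name] = fn

def invoke(name, request):
    return registry[name](request)   # every call sees the latest version

deploy("pricing", lambda req: {"total": req["qty"] * 9.99 * 0.85})  # v1: 15% off
v1 = invoke("pricing", {"qty": 1})

deploy("pricing", lambda req: {"total": req["qty"] * 9.99 * 0.75})  # v2: 25% off
v2 = invoke("pricing", {"qty": 1})

assert v1["total"] != v2["total"]    # both versions served, no restart
```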

📑

FnGraph Pipelines

Compose functions into in-memory execution graphs. The runtime owns the graph, traces it, and can optimize it. 7 steps in 0.458ms.
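A toy version of the execution model (plain Python; stage names and the `run_graph` shape are illustrative): the runtime owns an ordered stage list and threads one shared heap through every step, so each stage reads its predecessors' output in place.

```python
# Minimal in-memory pipeline runner: one shared heap, no serialization
# between stages.
def run_graph(stages, heap):
    for stage in stages:
        stage(heap)                  # each stage mutates the shared heap
    return heap

def auth(heap):    heap["auth"] = {"user": "u-42"}
def pricing(heap): heap["price"] = {"total": 84.99}
def quote(heap):   heap["quote"] = {**heap["auth"], **heap["price"]}

result = run_graph([auth, pricing, quote], heap={})
assert result["quote"] == {"user": "u-42", "total": 84.99}
```

Because the runtime owns the list, it can also trace or reorder stages without touching function code.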

🛡

Circuit Breakers

Per-function failure isolation. If one function fails, the breaker trips — protecting the shared organism from cascade failures.
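The pattern, reduced to a sketch (plain Python, not the KubeFn implementation): after a threshold of consecutive failures the breaker opens and short-circuits further calls, so a failing function cannot drag the shared organism down with it.

```python
# Minimal per-function circuit breaker: trips after `threshold`
# consecutive failures, then fails fast without calling the function.
class CircuitBreaker:
    def __init__(self, fn, threshold=3):
        self.fn, self.threshold, self.failures = fn, threshold, 0

    def call(self, *args):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open")   # short-circuit, no call made
        try:
            result = self.fn(*args)
            self.failures = 0                    # success resets the count
            return result
        except Exception:
            self.failures += 1
            raise

def flaky(_):
    raise ValueError("downstream error")

breaker = CircuitBreaker(flaky, threshold=2)
for _ in range(2):
    try:
        breaker.call("req")
    except ValueError:
        pass

try:
    breaker.call("req")
except RuntimeError as e:
    assert str(e) == "circuit open"              # tripped after 2 failures
```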

🔍

Causal Tracing

OpenTelemetry spans per function with revision IDs, request lineage, and heap mutation tracking. See exactly what happened in memory.
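What a span per function looks like, conceptually (plain Python; the field names are illustrative — the real runtime emits OpenTelemetry spans): each call records its revision ID and the heap keys it wrote, which is what makes mutations attributable.

```python
# Sketch of causal tracing: wrap each function call in a span that
# records which heap keys it wrote (mutation tracking).
spans = []

def traced(name, revision, fn, heap):
    before = set(heap)
    fn(heap)
    spans.append({
        "function": name,
        "revision": revision,
        "heap_writes": sorted(set(heap) - before),
    })

heap = {}
traced("auth", "rev-7", lambda h: h.update(auth={"user": "u-42"}), heap)
traced("pricing", "rev-3", lambda h: h.update(price={"total": 84.99}), heap)

assert spans[0]["heap_writes"] == ["auth"]
assert spans[1] == {"function": "pricing", "revision": "rev-3",
                    "heap_writes": ["price"]}
```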

Multi-runtime. One concept.

Memory-Continuous Architecture isn't language-specific.
Same shared-memory concept. Different runtimes.

Java
Virtual threads · 12 enterprise examples
Kotlin
Data classes · Extension functions
Scala
Pattern matching · Functional
Groovy
DSLs · Dynamic scripting
Python
ML pipelines · 1.1-5.5x faster
Node.js
API gateways · 2-10x faster

Every container you don't need

Every sidecar, init container, cron job, and queue worker in Kubernetes is a container that doesn't need to exist. KubeFn runs them all as functions in a single warm JVM — zero cold starts, zero container builds.

🕘 Cron Jobs & Schedulers
Today: Full container build for a 200ms task. 30s JVM cold start. Idle pods.
KubeFn: @FnSchedule(cron = "0 0/15 * * *") — runs in the warm JVM.
@FnSchedule(cron = "0 0/15 * * *")
@FnRoute(path = "/cleanup")
public class SessionCleanup implements KubeFnHandler {
    // Runs on schedule AND via HTTP. Shared heap access.
}
📥 Queue Workers
Today: Separate deployment per queue. Each deserializes messages independently.
KubeFn: @FnQueue(topic = "orders") — consumes in-process, zero-copy.
@FnQueue(topic = "orders", concurrency = 4)
public class OrderProcessor implements KubeFnHandler {
    // Consumes from queue. DLQ on failure. Shared heap.
}
🛡 Sidecar Killer
Today: Auth proxy + metrics shipper + config agent = 3x pod resources.
KubeFn: Auth, metrics, config as in-process functions. One pod.
Before: 3 sidecars × 500 pods = 1,500 extra containers
After:  3 functions in shared JVM = 0 extra containers
🔌 Webhooks & Adapters
Today: One service per webhook source. Idle pods for sporadic events.
KubeFn: Webhook = function. Shares heap with downstream. No idle infra.
Stripe + GitHub + Slack + Salesforce webhooks
= 4 functions, 1 JVM, 0 idle pods
Init Container Replacement
Today: Migrations, config gen, dependency checks = startup delays.
KubeFn: @FnLifecyclePhase(phase = INIT) — runs in the warm organism.
Preload reference data, verify deps, warm caches
All before traffic starts. Zero extra containers.
🔥 Cache Warmers
Today: Cron job loads data into Redis. Every service queries Redis.
KubeFn: Function loads into HeapExchange. All functions read zero-copy.
Product catalog, price tables, feature flags
One load, all functions read. No Redis needed.
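The load-once, read-everywhere pattern as a plain-Python sketch (data and function names are illustrative): one warmer publishes the catalog into the shared heap, and every reader gets the same object with no Redis round-trip.

```python
# Cache-warmer sketch: one function loads reference data into the
# shared heap; all other functions read it in place, zero-copy.
heap = {}

def warm_cache(heap):
    heap["catalog"] = {"sku-1": 9.99, "sku-2": 19.99}   # one load...

def pricing(heap, sku):
    return heap["catalog"][sku]                          # ...every reader shares it

def quote(heap, sku):
    return {"sku": sku, "price": heap["catalog"][sku]}

warm_cache(heap)
assert pricing(heap, "sku-1") == 9.99
assert quote(heap, "sku-2") == {"sku": "sku-2", "price": 19.99}
```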

Also: API aggregation, rate limiters, feature flags, audit trails, ETL stages, stream processors, saga coordinators, notification dispatchers, data validators, health monitors, retry handlers, protocol translators, rules engines, logging pipelines, metric aggregators...

Type-safe heap access with HeapKey<T>

Compile-time safe. IDE autocomplete. No string typos. No type mismatches.

// Define typed keys in your contracts module
public final class HeapKeys {
    public static final HeapKey<PricingResult> PRICING =
        HeapKey.of("pricing:current", PricingResult.class);
    public static final HeapKey<TaxResult> TAX =
        HeapKey.of("tax:current", TaxResult.class);
}

// Use in functions — wrong type won't compile
PricingResult pricing = ctx.heap().require(HeapKeys.PRICING);
ctx.heap().publish(HeapKeys.TAX, taxResult);

Get started in 30 seconds

Install the CLI, scaffold a function, deploy.

# Install CLI
brew tap kubefn/tap && brew install kubefn
# Or: curl -sf https://kubefn.com/install.sh | sh

# Scaffold a function
kubefn init pricing-engine checkout-service

# Add dependency in build.gradle.kts (Maven Central — no extra repos needed)
compileOnly("com.kubefn:kubefn-api:0.3.1")

# Start local runtime with hot-reload
kubefn dev

# Deploy to Kubernetes
helm install kubefn kubefn/kubefn

No Kubernetes? Try KubeFn Lite

Same HeapExchange. Same FnGraph. Same hot-swap. Zero infrastructure.

# Install from ClawHub or pip
pip install kubefn-lite

# Deploy a function
engine.deploy(
    name="pricing",
    source_code='def handler(input): return {"price": input["qty"] * 9.99}',
    entry_point="handler"
)

# Compose a pipeline — shared memory, zero serialization
result = engine.run_graph({
    "stages": [{"function": "pricing"}, {"function": "tax"}]
})

# Hot-swap live — no restart
engine.hot_swap("pricing", new_source_code)

GitHub · 20 tests passing · Python + Node.js + JVM · Ready for production? → Deploy on Kubernetes

Write a function in 10 lines

Functions are independently deployable but collaborate through shared heap objects.

// Zero-copy: publish once, read everywhere
@FnRoute(path = "/pricing")
@FnGroup("checkout")
public class PricingFunction implements KubeFnHandler {

    public KubeFnResponse handle(KubeFnRequest req, KubeFnContext ctx) {
        // Read from HeapExchange — SAME object, zero copy
        var auth = ctx.heap().get("auth", Map.class);

        var price = Map.of("total", 84.99);
        ctx.heap().publish("price", price, Map.class);

        return KubeFnResponse.ok(price);
    }
}

Hot-swap under fire

Replace a function while 200 requests are in flight. Zero downtime.

Total requests: 200
Successful: 200
Failed: 0
Dropped: 0
 
v1 responses: 51 (15% discount)
v2 responses: 149 (25% discount)
 
Zero downtime. Born warm. The organism lives.

Research Paper

Decoupling Deployment Boundaries from Memory Boundaries in Function Composition
Pranab Sarkar · March 2026 · Preprint

The dominant abstraction in serverless composition conflates deployment isolation with data representation boundaries. This paper introduces Memory-Continuous Architecture — a runtime model that decouples these concerns, enabling independently deployable functions to share in-memory object graphs with zero serialization.

Read Paper (DOI) PDF Source Code
DOI: 10.5281/zenodo.19161471 · ORCID: 0009-0009-8683-1481