Tech stack at AdPriva

Background

AdPriva exists because we believe advertising verification should not require user tracking. Every architectural decision we make flows from this principle: proving genuine ad engagement through cryptography rather than surveillance. Our approach requires us to rethink how data moves across the ad ecosystem, how cryptographic proofs can guarantee correctness without revealing identity, and how real-time systems can remain both fast and trustworthy under high load.

Achieving this balance means making continuous tradeoffs between long-term architectural soundness and short-term iteration speed. It also requires relentless focus on latency, stability, and design — whether it is the JavaScript tag running on a publisher’s page, the backend processing millions of anonymous events, or the mobile vault app that lets users control their preferences without exposing them. We maintain a disciplined engineering culture that avoids unnecessary complexity while refusing to compromise on correctness and verifiability.

The stack

We work in a single monorepo containing backend services, cryptographic components, event pipelines, dashboards, SDKs, and mobile applications. This unified structure provides a shared foundation for code reuse and consistent patterns across the entire system. The backend and mobile infrastructure share substantial logic written in Rust, which allows us to centralize cryptographic routines, proof generation, data models, and validation flows in one place. Although we also use Go, TypeScript, Swift, and Kotlin, all of these languages interact with the same schemas, specifications, and Rust libraries through generated interfaces. This lets us ship rapidly without fragmenting the core logic that defines how events are processed or how proofs are constructed.

Our approach emphasizes modularity. Rather than building large monolithic services, we break systems into small, isolated components that compile independently. This allows developers to move quickly without triggering expensive rebuilds across the entire codebase. It also means that features can be developed in parallel while maintaining strict boundaries around sensitive logic such as hashing, HMAC signing, Merkle batching, and zero-knowledge proof preparation.

On the frontend, we use React and TypeScript to build a highly responsive dashboard environment. The publisher console surfaces real-time proof batches, event flows, verification status, and campaign insights. We avoid heavy third-party dependencies unless they bring structural value, preferring to keep our UI layer lightweight, fast, and predictable.

Cryptographic infrastructure

This is where AdPriva differs fundamentally from traditional adtech. Our verification model replaces tracking with mathematics.

Zero-knowledge proofs allow us to verify that ad engagement occurred without revealing who engaged. We integrate with zkVerify for proof verification, creating cryptographic attestations that confirm genuine interaction while the user remains completely anonymous. The proof establishes the fact without exposing the witness. This is not a privacy feature bolted onto a surveillance system — it is the verification mechanism itself.
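To make that split concrete, here is a conceptual sketch in Rust of what a proof separates: a public statement anyone can check, and a private witness that never leaves the device. The types and field names are illustrative only; they are not the zkVerify interface or our actual circuit.

```rust
// Conceptual sketch only: not the zkVerify API or our real circuit.
// It shows the split a proof enforces between public and private data.
#[allow(dead_code)]
pub struct PublicStatement {
    pub campaign_id: u64,
    pub event_kind: EventKind,
    pub batch_root: [u8; 32], // Merkle root the engagement is committed into
}

#[allow(dead_code)]
pub enum EventKind {
    Impression,
    Click,
    Conversion,
}

// Known only to the device; consumed when the proof is generated.
#[allow(dead_code)]
pub struct PrivateWitness {
    pub device_secret: [u8; 32],
    pub engagement_nonce: [u8; 32],
}

// What a verifier actually receives: the statement plus opaque proof bytes.
#[allow(dead_code)]
pub struct EngagementProof {
    pub statement: PublicStatement,
    pub proof_bytes: Vec<u8>,
}
```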

Every engagement event gets hashed into a Merkle tree structure. This provides tamper-evident aggregation that lets advertisers verify the integrity of rolled-up metrics without accessing individual records. If anyone attempts to modify historical data, the tree roots will not match. Advertisers get verifiable proof that their numbers are real without needing to trust us or see raw data.
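As a rough illustration (not our production batching code), this is how a Merkle root over hashed events can be computed with the sha2 crate. Changing any single event changes the root, which is what makes the aggregation tamper-evident.

```rust
use sha2::{Digest, Sha256};

// A rough sketch, not our batching code: hash each event into a leaf and
// fold adjacent pairs until a single Merkle root remains.
fn merkle_root(events: &[&[u8]]) -> [u8; 32] {
    if events.is_empty() {
        return [0u8; 32];
    }
    let mut level: Vec<[u8; 32]> = events.iter().map(|e| Sha256::digest(e).into()).collect();
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| {
                let mut hasher = Sha256::new();
                hasher.update(pair[0]);
                // Duplicate the last node when a level has odd length.
                hasher.update(pair.get(1).unwrap_or(&pair[0]));
                hasher.finalize().into()
            })
            .collect();
    }
    level[0]
}

fn main() {
    let events: [&[u8]; 3] = [b"impression:1", b"click:2", b"view:3"];
    println!("root = {:x?}", merkle_root(&events));
    // Modifying any one event produces a completely different root.
}
```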

We sign engagement events at the point of capture using HMAC. This ensures authenticity as data flows through our pipeline. By the time an event reaches proof generation, we can cryptographically verify it has not been tampered with since the SDK recorded it. The chain of custody is mathematically provable.
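A minimal sketch of what point-of-capture signing looks like with the hmac and sha2 crates; key management and the real payload format are elided, and the key shown is a placeholder.

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

// Sketch only, not our SDK code: sign an event payload at capture time and
// verify it later in the pipeline.
fn sign_event(key: &[u8], payload: &[u8]) -> Vec<u8> {
    let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts keys of any length");
    mac.update(payload);
    mac.finalize().into_bytes().to_vec()
}

fn verify_event(key: &[u8], payload: &[u8], tag: &[u8]) -> bool {
    let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts keys of any length");
    mac.update(payload);
    // Constant-time comparison, so timing cannot leak the tag.
    mac.verify_slice(tag).is_ok()
}

fn main() {
    let key = b"demo-key-loaded-from-secure-storage"; // placeholder key
    let payload = b"{\"event\":\"impression\",\"campaign\":42}";
    let tag = sign_event(key, payload);
    assert!(verify_event(key, payload, &tag));
}
```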

Proof batches are anchored to Horizen, creating immutable timestamps that neither AdPriva nor anyone else can retroactively modify. This gives advertisers an external, independently verifiable record that proofs existed at a specific point in time. The blockchain becomes a public notary for our verification claims.
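The shape of an anchor record might look something like the following. The field names are hypothetical and this is not Horizen's on-chain format, but it shows what an advertiser needs in order to check a batch against its anchor.

```rust
// Hypothetical shape of an anchor record. Field names are illustrative;
// this is not Horizen's actual on-chain format.
pub struct AnchorRecord {
    pub batch_root: [u8; 32], // Merkle root of the anchored proof batch
    pub anchored_at: u64,     // block timestamp, seconds since the epoch
    pub tx_reference: String, // where to locate the anchor on-chain
}

// An advertiser recomputes the batch root from the records they were given
// and checks it against the anchored value. No trust in AdPriva required.
pub fn matches_anchor(recomputed_root: [u8; 32], anchor: &AnchorRecord) -> bool {
    recomputed_root == anchor.batch_root
}
```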

iOS

Our iOS architecture is intentionally modular. The mobile app comprises many Swift modules that separate networking, cryptographic operations, storage, preference vault logic, UI components, and feature layers. These modules link against Rust libraries compiled into Swift-friendly interfaces, giving our mobile codebase access to the same cryptographic primitives used on the backend. This ensures that proofs, hashing, and validation behave identically whether they execute on a user’s device or in our infrastructure.

UIKit remains our primary UI framework given its maturity, performance characteristics, and long-term stability. We selectively use SwiftUI where its declarative nature accelerates development without compromising control. In components such as interactive animations or engagement visualization, we extend into Metal to unlock GPU-accelerated transitions and render effects that standard frameworks cannot achieve.

For dependency management we take a minimalist approach, keeping architectural boundaries clear and avoiding unnecessary external code in security-critical paths. Build times remain manageable thanks to stable module boundaries and careful avoidance of unnecessary rebuild triggers.

Android

Our Android architecture mirrors the iOS philosophy while embracing the strengths of the Android ecosystem. The codebase is deeply modular, with each component built as an independent Gradle module optimized for low coupling and high cohesion. Hilt handles dependency injection, and the strategic split between api and implementation modules keeps incremental builds efficient even as the app grows.

We rely on Jetpack libraries to handle navigation, lifecycle management, persistence, and concurrency. Compose serves as our primary UI toolkit because it enables rapid development with minimal boilerplate, but we do not hesitate to fall back to Views when deterministic performance is required. For highly visual components we employ Vulkan and custom shader work, applying low-level graphics capabilities selectively where they make a meaningful difference.

As with iOS, the Android app links to the same Rust foundational libraries, with Kotlin bindings generated to hide FFI complexities entirely. This ensures that hashing, signing, and preference vault operations behave consistently across both major mobile platforms. A bug fixed in Rust is fixed everywhere simultaneously.

Mobile infrastructure

One of the most impactful architectural decisions we made was to unify mobile and backend logic using Rust. Networking, validation, cryptographic utilities, data models, and proof-preparation logic are written once and consumed by both iOS and Android through generated interfaces. This drastically reduces the risk of behavioral drift between platforms and allows engineers familiar with backend systems to contribute directly to client-side infrastructure without context switching.

Because the monorepo houses all components, integrating Rust code into Swift and Kotlin is straightforward. The shared libraries compile into lightweight modules that final mobile targets depend upon. Internally, our mobile apps use reactive patterns to communicate with Rust components in a way that hides FFI boundaries entirely and lets engineers focus purely on business logic rather than cross-language plumbing.
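As an illustration of the pattern, a shared crate can expose a function once and have Swift and Kotlin wrappers generated from it. The UniFFI-style attributes below are an assumption for the sake of the example; the binding generator we actually use is not the point, the single Rust implementation is.

```rust
// Shared crate consumed by the backend, iOS, and Android. The UniFFI-style
// attributes are illustrative; the important part is that the function is
// written once in Rust and Swift/Kotlin wrappers are generated from it.
use sha2::{Digest, Sha256};

uniffi::setup_scaffolding!();

/// Hash an engagement payload identically on every platform.
#[uniffi::export]
pub fn hash_event(payload: Vec<u8>) -> Vec<u8> {
    Sha256::digest(&payload).to_vec()
}
```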

Production environment

Environment

Our production environment runs on Google Cloud Platform, with multi-cloud capabilities planned as we scale into additional markets and integrate with more DSPs and verification partners. Infrastructure as Code through Pulumi ensures every environment is reproducible, versioned, and secure. This lets our engineering team deploy frequently and confidently, often multiple times per day when velocity demands it.

The backend begins as a single deployable unit for rapid iteration, containing services for ingestion, batching, proof generation, fraud scoring, and verification. As traffic increases, individual components can migrate to separate deployments that scale independently without unnecessary resource duplication. This philosophy acknowledges that system migration is inevitable and designs for it from the start, reducing the pain historically associated with transitioning from monolith to microservices.

Language

Rust is our core backend language. While performance, memory safety, and concurrency are significant advantages, the real reason we chose Rust is its ability to support rapid iteration while keeping the codebase clean and correct. The type system, traits, pattern matching, and strict compilation model allow us to make sweeping changes confidently, knowing the compiler acts as a guardrail. The more complex our proof pipeline becomes, the more critical these guarantees are. When cryptographic correctness is table stakes, “if it compiles, it works” is not a slogan but a necessity.
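A small example of the kind of guardrail we mean (illustrative only, not our pipeline code): encoding pipeline state in the type system means an unverified batch can never be handed to a stage that expects a verified one.

```rust
use std::marker::PhantomData;

// Illustrative only. Marker types encode the state of a proof batch, so the
// compiler rejects any path that tries to anchor an unverified batch.
pub struct Unverified;
pub struct Verified;

pub struct ProofBatch<State> {
    root: [u8; 32],
    _state: PhantomData<State>,
}

impl ProofBatch<Unverified> {
    pub fn new(root: [u8; 32]) -> Self {
        ProofBatch { root, _state: PhantomData }
    }

    // Verification consumes the unverified batch and returns a verified one,
    // so stages cannot be reordered or skipped.
    pub fn verify(self) -> Result<ProofBatch<Verified>, &'static str> {
        // The real check would validate the proof here.
        Ok(ProofBatch { root: self.root, _state: PhantomData })
    }
}

pub fn anchor(batch: &ProofBatch<Verified>) {
    // Only a verified batch can reach this function.
    println!("anchoring root {:x?}", batch.root);
}

fn main() {
    let batch = ProofBatch::new([0u8; 32]);
    let verified = batch.verify().expect("verification succeeds in this sketch");
    anchor(&verified);
    // anchor(&ProofBatch::new([0u8; 32])); // does not compile: wrong state
}
```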

Go is used alongside Rust for services where simplicity, familiar concurrency patterns, or rapid prototyping provide an advantage. Both languages integrate seamlessly through well-defined interfaces and gRPC schemas, maintaining strong consistency across the system without forcing every problem into the same solution.

Monitoring and alerting

We use Prometheus, OpenTelemetry, Jaeger, and Grafana for metrics, tracing, and observability. These tools allow us to maintain the responsiveness and correctness required for a real-time verification system. When you are generating cryptographic proofs at scale, visibility into latency distributions and error rates is not optional — it is how you keep promises to advertisers and publishers.
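For instance, recording proof-generation latency as a histogram takes only a few lines with the Rust prometheus crate. The metric name here is made up for the sketch, not one of our real series.

```rust
use prometheus::{register_histogram, Encoder, TextEncoder};

fn main() {
    // Illustrative metric: how long a proof batch takes to generate.
    let proof_latency = register_histogram!(
        "proof_generation_seconds",
        "Time spent generating a proof batch"
    )
    .expect("metric can be registered");

    // Time a unit of work and record its duration.
    let timer = proof_latency.start_timer();
    std::thread::sleep(std::time::Duration::from_millis(10));
    timer.observe_duration();

    // Render everything in the default registry in Prometheus text format.
    let mut buf = Vec::new();
    TextEncoder::new()
        .encode(&prometheus::gather(), &mut buf)
        .expect("metrics encode");
    println!("{}", String::from_utf8(buf).unwrap());
}
```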

Databases

Our storage strategy combines PostgreSQL for transactional data, Redis for low-latency ephemeral state, and Redpanda as the backbone of our event stream. All engagement events — impressions, views, clicks, conversions — arrive anonymously and flow into Redpanda before being batched into Merkle trees and fed into the proof pipeline.

This architecture allows us to choose the right consistency or performance model for each type of workload. PostgreSQL anchors state that must not drift. Redis provides instant access to hot data and handles rate limiting during traffic spikes. Redpanda delivers a high-performance streaming layer that is central to our cryptographic verification model, moving events from capture through proof generation with minimal latency.
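Because Redpanda speaks the Kafka protocol, a worker can consume engagement events with an ordinary Kafka client. Below is a minimal sketch using the rdkafka crate; the broker address, group id, and topic name are placeholders, not our real configuration.

```rust
use rdkafka::config::ClientConfig;
use rdkafka::consumer::{Consumer, StreamConsumer};
use rdkafka::Message;

// Minimal consumer sketch; configuration values are placeholders.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let consumer: StreamConsumer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("group.id", "proof-batcher")
        .set("auto.offset.reset", "earliest")
        .create()?;
    consumer.subscribe(&["engagement-events"])?;

    loop {
        let msg = consumer.recv().await?;
        if let Some(payload) = msg.payload() {
            // Hand the event bytes to validation and Merkle batching.
            println!("received event of {} bytes", payload.len());
        }
    }
}
```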

Data platform

We treat data as one of our most valuable assets — not for behavioral profiling, which we explicitly reject, but for verifiable measurement, fraud detection, and bidder intelligence. The distinction matters. Traditional adtech hoovers up user data to target and retarget. We process anonymous events to prove that real attention occurred.

All payloads follow Protocol Buffer schemas with custom annotations that dictate privacy handling and processing behavior. Events flow through Redpanda topics using a custom envelope format that allows multiple schema types to coexist efficiently within shared infrastructure.
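Conceptually, the envelope looks something like the struct below. The fields are hypothetical rather than our actual wire format, but the idea is a schema tag plus opaque payload bytes so heterogeneous events can share topics.

```rust
// Hypothetical illustration of the idea, not our actual wire format: a thin
// envelope tags each payload with its schema so different event types can
// share the same Redpanda topics.
pub struct EventEnvelope {
    pub schema: SchemaId,    // which Protocol Buffer message the payload holds
    pub schema_version: u32, // lets old and new producers coexist
    pub payload: Vec<u8>,    // the serialized protobuf bytes
}

pub enum SchemaId {
    Impression,
    View,
    Click,
    Conversion,
}
```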

We process these streams with a combination of Rust workers for latency-sensitive operations and Apache Beam pipelines for batch analytics. Long-term storage goes into Parquet files on Google Cloud Storage, which then power analytics workloads through Dataflow, Dataproc, or BigQuery depending on query patterns and latency requirements. This architecture enables both real-time and batch insights while maintaining strict verifiability and privacy boundaries throughout.

Build, test, deploy workflow

CI/CD sits at the heart of our development workflow. Hermetic builds, aggressive caching, and reproducible environments keep build and test cycles fast even as the monorepo grows. GitHub Actions and BuildBuddy work together to provide reliable, parallelized pipelines with detailed profiling. Deterministic outputs ensure that engineers receive fast, consistent feedback regardless of which part of the system they are working on.

We invest heavily in CI infrastructure because we have learned that slow feedback loops compound into slow everything else. When a developer can push a change and know within minutes whether it works, they stay in flow. When they wait thirty minutes, they context switch and lose momentum. Fast CI is not a luxury — it is how small teams move faster than large ones.

What we do not build

Equally important is what is absent from our stack. We have no user tracking infrastructure. No cross-site identifiers. No fingerprinting libraries. No data broker integrations. No behavioral profiles. These are not features we chose to skip — they are architecturally impossible given how our system works. You cannot accidentally add surveillance to a platform built on zero-knowledge proofs and anonymous event streams.

This is the point. Privacy is not a policy we follow. It is a property the system enforces.

What is next

We are evolving quickly and constantly refining our technical foundations. Some architectural decisions will inevitably be challenged as we scale, and we welcome that scrutiny.

We are actively exploring zkML to verify AI-based fraud detection models without exposing inputs or training data. We are investigating secure multiparty computation for encrypted preference vaults that let users express intent without revealing it to anyone, including us. Hardware acceleration for zero-knowledge proofs will become important as proof volume grows. Deeper integrations with DePIN infrastructure may allow us to decentralize verification itself, removing AdPriva as a single point of trust.

We are also watching Trusted Execution Environments closely. TEEs could provide additional guarantees in scenarios where even our own infrastructure should not see plaintext data. The goal is a system where privacy does not depend on trusting AdPriva — it depends on mathematics and hardware that anyone can verify.

The adtech stack was designed in an era before practical zero-knowledge proofs and modern cryptography. We are rebuilding it with tools that did not exist when the surveillance model became entrenched. The most interesting technical challenges are still ahead of us.