Building a High-Performance UUID Generation Service in Go

    October 21, 2024
    11 min read
    Long read
    Tutorial
    uuid
    performance
    distributed-systems
    architecture

    Introduction

    UUIDs are a cornerstone of distributed system design — but when you're tasked with generating millions of UUIDs per second, efficiency is everything.

    In this article, we'll walk through building a high-performance UUID generation service in Go. From architecture to benchmarking to production deployment, this tutorial focuses on scalability, reliability, and throughput.


    Why Go for UUID Generation?

    Go (Golang) is an ideal choice for high-throughput microservices thanks to:

    • Lightweight Goroutines for handling concurrency
    • Compiled binary performance
    • Built-in HTTP server
    • Low memory overhead

    Let’s turn Go into a UUID-generating beast.


    Project Overview

    Here’s what we’re building:

    • An HTTP-based UUID microservice
    • REST API: GET /uuid and GET /uuid?count=1000
    • High throughput (~1M+ UUIDs/sec with batching)
    • Monitoring with Prometheus
    • Benchmarks using wrk and hey

    Step 1: Set Up the Go Project

    bash
    mkdir uuid-service && cd uuid-service
    go mod init uuid-service

    Install the UUID package:

    bash
    go get github.com/google/uuid

    Step 2: Basic UUID Handler

    Create main.go:

    go
    package main
    
    import (
        "fmt"
        "log"
        "net/http"
        "strconv"
        "github.com/google/uuid"
    )
    
    func uuidHandler(w http.ResponseWriter, r *http.Request) {
        countStr := r.URL.Query().Get("count")
        count := 1
        if countStr != "" {
            if c, err := strconv.Atoi(countStr); err == nil && c > 0 && c <= 10000 {
                count = c
            }
        }
    
        uuids := make([]string, count)
        for i := 0; i < count; i++ {
            uuids[i] = uuid.New().String()
        }
    
        for _, id := range uuids {
            fmt.Fprintln(w, id)
        }
    }
    
    func main() {
        http.HandleFunc("/uuid", uuidHandler)
        log.Println("UUID service listening on :8080")
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

    Run the server:

    bash
    go run main.go

    Test it:

    bash
    curl http://localhost:8080/uuid
    curl "http://localhost:8080/uuid?count=1000"

    Step 3: Benchmarking with wrk

    Let’s see what this baby can do.

    bash
    wrk -t12 -c100 -d10s http://localhost:8080/uuid

    Exact numbers depend heavily on hardware, but on a well-provisioned multi-core machine this handler can serve hundreds of thousands of requests per second, especially with proper CPU pinning.

    Want more? Add batching, as sketched after this list:

    • Generate all UUIDs into a single JSON array
    • Use application/json response type
    • Reduce flush time and I/O overhead
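
    For example, a batched JSON variant of the handler can encode the whole slice in a single write. This is a minimal sketch, assuming a response shape of {"uuids": [...]} and a hard-coded batch size; in practice you would parse ?count= exactly as in uuidHandler.

    go
    // Requires: import "encoding/json"
    func uuidJSONHandler(w http.ResponseWriter, r *http.Request) {
        count := 1000 // parse from ?count= in practice, as in uuidHandler
        uuids := make([]string, count)
        for i := range uuids {
            uuids[i] = uuid.New().String()
        }

        // Encode the whole batch as one JSON document in a single response write.
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string][]string{"uuids": uuids})
    }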

    Step 4: Performance Tuning

    a) Use a UUID Pool

    Avoid allocating memory on every call by reusing slices.
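
    One common approach is a sync.Pool of reusable output buffers, so each response is built once and written in a single call. This is a minimal sketch, assuming plain-text output; the buffer type and the hard-coded batch size are illustrative choices.

    go
    // Requires: import ("bytes"; "sync")
    var bufPool = sync.Pool{
        New: func() any { return new(bytes.Buffer) },
    }

    func pooledUUIDHandler(w http.ResponseWriter, r *http.Request) {
        // Reuse a buffer from the pool instead of allocating per request.
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()
        defer bufPool.Put(buf)

        for i := 0; i < 1000; i++ {
            buf.WriteString(uuid.New().String())
            buf.WriteByte('\n')
        }
        w.Write(buf.Bytes())
    }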

    b) Parallelize Bulk Generation

    For large count values, split the batch across goroutines so that each worker fills its own segment of the output slice, as in the sketch below.
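
    This is a minimal sketch, assuming the batch is divided across runtime.NumCPU() workers; each goroutine writes only to its own index range, so no locking is needed.

    go
    // Requires: import ("runtime"; "sync")
    func generateParallel(count int) []string {
        out := make([]string, count)
        workers := runtime.NumCPU()
        chunk := (count + workers - 1) / workers

        var wg sync.WaitGroup
        for start := 0; start < count; start += chunk {
            end := start + chunk
            if end > count {
                end = count
            }
            wg.Add(1)
            go func(start, end int) {
                defer wg.Done()
                for i := start; i < end; i++ {
                    out[i] = uuid.New().String() // disjoint index ranges: no data race
                }
            }(start, end)
        }
        wg.Wait()
        return out
    }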

    c) Use Faster Libraries (Benchmarked)

    • github.com/google/uuid – stable, default choice
    • github.com/satori/go.uuid – older, known bugs
    • github.com/oklog/ulid – lexicographically sortable
    • github.com/rs/xid – compact, URL-friendly

    Test each library using go test -bench=. -benchmem.
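
    As a starting point, a benchmark for google/uuid could look like the sketch below (file and package names are illustrative); benchmarks for the other libraries follow the same pattern with their respective constructors.

    go
    // uuid_bench_test.go
    package uuidbench

    import (
        "testing"

        "github.com/google/uuid"
    )

    func BenchmarkGoogleUUID(b *testing.B) {
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
            _ = uuid.New().String()
        }
    }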


    Step 5: Adding Prometheus Metrics

    Add the Prometheus client:

    bash
    go get github.com/prometheus/client_golang/prometheus

    Expose metrics:

    go
    import "github.com/prometheus/client_golang/prometheus/promhttp"

    // in main(), before ListenAndServe:
    http.Handle("/metrics", promhttp.Handler())

    Now scrape with Prometheus and monitor UUID throughput in Grafana.
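
    To have a throughput series worth graphing, you can register a counter and increment it from the handler. A minimal sketch, with an illustrative metric name:

    go
    // Requires: import "github.com/prometheus/client_golang/prometheus"
    var uuidsGenerated = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "uuids_generated_total",
        Help: "Total number of UUIDs generated.",
    })

    func init() {
        prometheus.MustRegister(uuidsGenerated)
    }

    // Then, inside uuidHandler, after a batch is generated:
    // uuidsGenerated.Add(float64(count))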


    Step 6: Containerize It

    Create a Dockerfile:

    dockerfile
    FROM golang:1.20-alpine
    WORKDIR /app
    COPY . .
    RUN go build -o uuid-service
    CMD ["./uuid-service"]

    Build and run:

    bash
    docker build -t uuid-service .
    docker run -p 8080:8080 uuid-service

    Step 7: Scale Horizontally

    • Deploy behind a load balancer (e.g., NGINX, HAProxy, Envoy)
    • Use Kubernetes Horizontal Pod Autoscaler
    • Track saturation metrics (latency, error rates, CPU)

    If you're pushing millions of UUIDs per second, distribute traffic across nodes and regionally shard instances.


    Bonus: gRPC Support

    If you’re operating in a low-latency environment or service mesh, consider exposing a gRPC endpoint:

    proto
    syntax = "proto3";
    // Illustrative request/response shapes; adjust the fields to your needs.
    message UUIDRequest  { uint32 count = 1; }
    message UUIDResponse { repeated string uuids = 1; }
    service UUIDService {
      rpc GenerateUUID (UUIDRequest) returns (UUIDResponse);
    }

    gRPC offers:

    • Smaller payloads
    • Strong typing
    • Streaming support
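
    On the Go side, the server is a thin wrapper around the same generation loop. A sketch, assuming the .proto above is compiled with protoc-gen-go and protoc-gen-go-grpc into a package imported here as pb; the pb identifiers below follow the standard generated-code conventions for that definition.

    go
    // Requires: import ("context"; "net"; "google.golang.org/grpc"), the
    // generated pb package, and github.com/google/uuid.
    type uuidServer struct {
        pb.UnimplementedUUIDServiceServer
    }

    func (s *uuidServer) GenerateUUID(ctx context.Context, req *pb.UUIDRequest) (*pb.UUIDResponse, error) {
        uuids := make([]string, req.GetCount())
        for i := range uuids {
            uuids[i] = uuid.New().String()
        }
        return &pb.UUIDResponse{Uuids: uuids}, nil
    }

    func serveGRPC() error {
        lis, err := net.Listen("tcp", ":9090")
        if err != nil {
            return err
        }
        s := grpc.NewServer()
        pb.RegisterUUIDServiceServer(s, &uuidServer{})
        return s.Serve(lis)
    }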

    Final Thoughts

    UUID generation seems simple — until you're scaling it to planetary levels. With Go, you can build a lean, blazing-fast service that handles UUID creation with ease.

    Whether you're building a distributed ID service, a UUID generator for event streams, or just trying to eliminate collisions at scale — Go has your back.

    Just remember:

    • Benchmark everything
    • Avoid unnecessary allocations
    • Scale horizontally, monitor vertically

    Now go make your UUIDs at ludicrous speed 🚀


    Summary

    This detailed tutorial walks through creating a high-performance UUID generation microservice using Go, focusing on scalability, throughput, and distributed systems design to handle millions of requests per second.

    TL;DR

    This article is a deep dive into building a UUID generation microservice in Go capable of handling massive scale.

    Key points to remember:

    • Use Go’s native concurrency to handle thousands of simultaneous UUID requests
    • Benchmark different UUID generation libraries (e.g. google/uuid, satori/go.uuid)
    • Use object pooling, batching, and horizontal scaling for optimal performance

    You’ll learn how to structure the service, expose endpoints, monitor metrics, and prepare it for real-world distributed workloads.
