Design Philosophy: Demonstrating that well-architected monolithic services eliminate the need for microservice complexity through strategic pattern implementation and operational discipline.

For my full-time job at Dakota, I designed a new financial services repository pattern. After much thought, its resulting design largely serves as a counter-argument to the prevailing microservices orthodoxy. The architecture demonstrates that introducing network boundaries between system components—the fundamental characteristic of microservices—often creates more problems than it solves. Instead, I’ve focused on patterns that provide all the benefits typically attributed to microservices (modularity, testability, team independence) while maintaining the operational simplicity and performance characteristics of a well-designed monolith.

Unfortunately, because it is proprietary and highly sensitive, I cannot share it in full. Instead, I will share its overarching directory structure and various insights I took away from building it. It also spawned ~train/standard, which I encourage you to inspect as you consider the content of this post.

Directory Structure: Modular Organization Without Distribution

Strategic Package Architecture: The repository demonstrates clear separation between internal implementation and external interfaces, providing microservices-style modularity within a single deployment unit:

internal/       # Implementation details, private to service
├── api/        # HTTP server and middleware
├── db/         # Database interface implementations  
├── cron/       # Background job coordination
└── managers/   # Business logic orchestration

pkg/            # Public interfaces, externally consumable
├── core/       # Domain transfer objects and business logic
└── cache/      # Abstraction interfaces for generic things

cmd/            # Application entry points
└── main.go     # Single binary with all dependencies

db/             # Schema evolution
└── migrations/ # Up and down migration files

This is largely a riff on the Standard Go Project Layout, which I think is generally excellent.

This organization achieves the bounded-context benefits typically cited for microservices while eliminating deployment coordination complexity. Each domain maintains clear interfaces and could theoretically become a separate service, but the default assumption is that process boundaries provide sufficient isolation for most business requirements. Cache is called out here because horizontal scaling necessitates some shared location to record what work (in-memory crons, for example) has already been done, and to enforce inbound request idempotency.

Package Boundaries as Service Boundaries: The architecture mirrors what would become separate services in a microservices design, but maintains the performance and consistency advantages of shared memory and single-process deployment.

Foundation: Environment Configuration and Graceful Shutdown

All Environment Variables in Main: I centralize every environment variable read within the main() function scope, creating a single source of truth for configuration dependencies.

// Every configuration dependency explicit at startup
stytchProjectID := os.Getenv("STYTCH_PROJECT_ID")
stytchSecret := os.Getenv("STYTCH_SECRET_GLOBAL")
platformBaseURL := os.Getenv("PLATFORM_BASE_URL")
platformAPIKey := os.Getenv("PLATFORM_API_KEY")

This eliminates the configuration chaos typical in distributed systems where services discover missing environment variables at runtime, often in production. The pattern forces architectural honesty: if a service needs external configuration, that dependency becomes immediately visible at startup.

Graceful Shutdown from Day One: Rather than retrofitting shutdown handling, I implement coordinated termination from the first commit:

// Signal handling with timeout-bounded shutdown.
// ctx is handed to every long-running component at startup.
ctx, cancel := context.WithCancel(context.Background())

stop := make(chan os.Signal, 1)
signal.Notify(stop, os.Interrupt, syscall.SIGTERM)

<-stop
cancel() // Signal all services to begin shutdown

shutdownCtx, shutdownCancel := context.WithTimeout(
    context.Background(),
    15*time.Second,
)
defer shutdownCancel()

// All components' Close(shutdownCtx) methods are called here...

Microservices architectures typically defer this complexity to orchestration platforms, but the fundamental problem remains: coordinating shutdown across system boundaries is inherently more complex than coordinating shutdown within process boundaries. The monolithic approach provides deterministic resource cleanup without network coordination overhead.

Cron Pattern: Avoiding Premature Distribution

Redis-Coordinated Background Jobs: Instead of immediately reaching for Kubernetes CronJobs or separate worker services, I implement distributed job coordination within the application boundary:

func (c *Cron) executeJobIfDue(ctx context.Context, job *job) {
    lockKey := redisLockPrefix + job.name

    // Distributed coordination without service boundaries:
    // only the one instance that wins SetNX runs the job.
    gotLock, err := c.redis.SetNX(
        ctx,
        lockKey,
        time.Now().UTC().Format(time.RFC3339),
        lockDuration,
    ).Result()
    if err != nil || !gotLock {
        return // another instance holds the lock, or Redis is unreachable
    }

    // ... run the job
}

This pattern provides the coordination guarantees needed for horizontal scaling without the operational complexity of managing separate services. The job definitions remain co-located with the business logic they support, eliminating the version skew and deployment coordination problems typical in microservices architectures.

Transition Strategy: When the system eventually requires Kubernetes-native job scheduling, the transition becomes trivial—extract the job functions to separate binaries while maintaining the same Redis coordination logic. The monolithic design doesn’t prevent this evolution; it simply delays the complexity until it’s justified by actual requirements rather than architectural assumptions. Our collective allergy to premature optimization should extend to this matter.

Core Domain Types: Interface Design Over Network Boundaries

Comprehensive DTO Architecture: I organize all domain entities within pkg/core/, creating type definitions that serve as contracts between system components:

type User struct {
    ID  UserID
    Org Org  // Contextual association, not foreign key
    
    FirstName string
    LastName  string
    Email     string
    
    Role   perms.Role
    Status UserOrgStatus
    
    UpdatedAt time.Time
    DeletedAt *time.Time
}

// Business logic embedded within domain boundaries
func (u *User) HasPerm(p perms.Perm) bool {
    return u.Role.Has(p)
}

Note the absence of JSON tags to discourage accidental use of a DTO at the API layer, which would be a terrible mistake. At the cost of some mapping functions that sometimes feel redundant, internal logic can evolve without impacting the API layer in cumbersome ways, and many business logic changes can be done without thinking about the API layer at all.
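The mapping functions mentioned above look roughly like this sketch (the apiUser shape and toAPIUser name are illustrative, not the repository's actual types):

```go
package main

import "time"

// Core-layer DTO: no JSON tags, free to evolve with business logic.
type User struct {
	FirstName string
	LastName  string
	Email     string
	DeletedAt *time.Time
}

// API-layer representation: JSON tags live here and only here.
type apiUser struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// toAPIUser is the explicit boundary between internal and external
// shapes; internal renames and refactors stop at this function.
func toAPIUser(u User) apiUser {
	return apiUser{
		Name:  u.FirstName + " " + u.LastName,
		Email: u.Email,
	}
}
```

When a field is renamed internally, only the mapper changes; the wire format, and every client depending on it, stays put.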

The microservices equivalent would require serialization overhead, network traversal, and partial failure handling for every permission check. Worse still: often it involves a web of dependency imports. I embed business logic directly within domain types, eliminating the performance penalty and complexity of remote procedure calls for fundamental operations, and keeping dependencies out of it.

KSUID Integration: All identifiers use KSUIDs, providing distributed coordination properties without requiring service-to-service communication for ID generation, and eliminating the need for most created_at database fields:

func (id *UserID) Scan(value interface{}) error {
    str, ok := value.(string)
    if !ok {
        return fmt.Errorf("cannot scan %T into UserID", value)
    }
    ksuidVal, err := ksuid.Parse(str)
    if err != nil {
        return fmt.Errorf("cannot parse %q as KSUID: %w", str, err)
    }
    *id = UserID(ksuidVal)
    return nil
}

This enables arbitrary horizontal scaling, just as UUIDs do, while providing time-based ordering and taking up less storage space. KSUIDs are just as easy to work with as UUIDs. This is a minor point.

Database Interface: Modular Without Distribution

Domain-Specific Interface Composition: I organize database operations through interface composition, letting a DB interface grow to arbitrary size without becoming cumbersome:

type DB interface {
    AdminOperations
    APILogs
    FeatureFlags
    Events
    Orgs
    Users
    WebhooksInbound
    
    Close(ctx context.Context) error
}

Each domain interface (Users, Orgs, Events) could theoretically become a separate service or separate database interface, but the current architecture provides all the benefits typically cited for service separation:

  • Team Independence: Different teams can implement domain-specific interfaces independently
  • Bounded Contexts: Clear separation between business domains
  • Testing Isolation: Mock implementations target specific domain interfaces

And in Go, it is still possible to pass a single interface through to a specific struct if you want to guarantee it cannot interact with resources outside its scope. (99% of the time, though, this is more trouble than it’s worth.)

Query Result Consistency: I enforce a three-value return pattern across all getter methods:

// Eliminates "not found vs. error" ambiguity without HTTP status codes
func (db *postgresDB) GetUserInOrg(
    ctx context.Context,
    userID core.UserID,
    orgID core.OrgID,
) (*core.User, bool, error)

The first return value is the resource, the second is a bool indicating whether it was found, and the third is an error, emitted in every case other than an empty result set.

Microservices architectures typically encode this information in HTTP status codes, or in gRPC errors that must be inspected, both of which are tedious to handle. The monolithic approach maintains semantic clarity without dealing with any of that.

Development Workflow: Monolithic Advantages

Makefile-Driven Development: I provide comprehensive development environment automation through Make targets:

# This automatically loads all environment variables from .env into the Make
# environment, available to all targets and subprocesses.
ifneq (,$(wildcard .env))
include .env
export
endif

all: help

help: ## Display target list
	@printf "Project Makefile\n\nCommon commands:\n"
	@grep -E '^[a-zA-Z0-9_-]+:.*?## .*$$' Makefile | sort | \
		awk 'BEGIN {FS = ":.*?## "}; {printf "  %-15s %s\n", $$1, $$2}'
	@echo ''

up: prepare start ## Complete environment initialization
prepare: db migrate redis ## Dependency coordination
init: down up # Complete reset without external orchestration

This setup lets the Makefile generate its own help text for each target and handle a .env file natively, in very few lines. If it ain’t broke, don’t fix it: for decades the Makefile has proven an enduring standard, more powerful than most developers know.

By contrast, microservices development typically requires Docker Compose files, service mesh configuration, and complex startup orchestration. Even abstraction behind a Makefile is painful in these situations and slows the feedback loop for every change. This is one of the most profound wastes of time in our industry! The monolithic approach eliminates this complexity and keeps local development coherent.

More on Docker: particularly in an industry setting where everyone is on a Mac, Docker is heavy and should be avoided. My general thesis is that if you need to use Docker as part of the typical local development experience of a project, you have already reached a point of project complexity that should be reserved for the largest companies in the world, and have therefore probably made a mistake.

I emphasize build-and-run speed from zero in a local environment, on bare metal only. Other useful Makefile targets include deps, which installs all dev and runtime dependencies on a local machine (assuming the presence of brew or a consistent Linux distro; teams that lack consistency across dev machines ought to resolve that problem first), and ready, executed via a pre-commit hook (itself installed through make hooks), which regenerates all generated files, runs tests, lints the code, and so on, automatically enforcing standards.

Testing Strategy: Dual testing approach distinguishing unit tests from integration tests:

unit: ## Fast feedback loop
	go test ./... -short

test: ## Complete system validation
	go test ./...

The monolithic architecture enables true integration testing without network mocking or service virtualization, and allows trivial port-forwarding to be added when outside dependencies do exist. Most tests exercise actual business logic paths rather than HTTP contract adherence.

Integration tests are often unnecessary in monolithic patterns, except for matters related to external data stores.

OpenAPI-First Development: API specification drives code generation, but the generated interfaces remain within process boundaries:

openapi: ## Generate server interfaces
	go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen \
		--config=internal/api/oapi/oapi-codegen.yaml openapi.yaml

This is ideally run as a pre-commit hook at all times, ensuring no drift between the specification and the project code.

Authorization Pattern: The AuthGate Function

Centralized Authorization Logic: Rather than distributing permission checks across multiple services, I implement a unified authorization pattern through the AuthGate function:

// AuthGate provides consistent authorization across all handlers
func AuthGate(ctx context.Context, perm perms.Perm) (
    orgID core.OrgID,
    orgFound bool,
    hasPerm bool,
) {
    user, ok := GetUser(ctx)
    if !ok || user.Org.ID.IsNil() {
        return core.OrgID{}, false, false
    }
    
    orgID, orgFound, hasPerm = user.Org.ID, true, user.HasPerm(perm)
    return
}

Consistent Handler Pattern: Every protected endpoint follows the same authorization pattern, eliminating the security inconsistencies typical in microservices architectures:

func (s *Server) ListTransactions(
    ctx context.Context,
    request oapi.ListTransactionsRequestObject,
) (oapi.ListTransactionsResponseObject, error) {
    _, orgFound, hasPerm := middleware.AuthGate(ctx, perms.ReadTxns)
    if !orgFound {
        return oapi.ListTransactions401JSONResponse{
            Code: errorcodes.Unauthorized.String(),
        }, nil
    }
    if !hasPerm {
        return oapi.ListTransactions403JSONResponse{
            Code: errorcodes.Forbidden.String(),
        }, nil
    }
    // ... business logic
}

This pattern provides several advantages over distributed authorization:

  • Single Source of Truth: All permission logic resides in one location, eliminating the synchronization problems inherent in distributed security systems
  • Performance: Authorization checks execute in microseconds rather than requiring network calls to separate authorization services
  • Consistency: Every handler follows identical patterns, reducing the likelihood of security bypasses
  • Debuggability: Authorization failures can be traced through single-process debugging rather than distributed tracing

Context-Based User Resolution: Prior to AuthGate, as implied by it, some middleware automatically resolves authenticated users and injects them into request context, supporting both API key and JWT authentication without requiring service-to-service communication for user lookups.

This is doable in microservice patterns, but typically either with more complex middleware patterns that hide how authentication works, or often with the mistake of needing to check in with a user service separate from the API layer on every request. The monolithic approach eliminates both problems while maintaining the flexibility to extract authorization logic into a separate service if future requirements justify the complexity.

There is a cost to this pattern: every handler must call AuthGate. But this turns out to be a benefit, because it makes it trivial to leave auth out of the handful of endpoints that, as all projects inevitably accumulate, must remain unauthenticated. More importantly, the comprehensibility of your API layer dies as soon as the middleware chain differs from endpoint to endpoint, so it’s much easier to put the context for the request into the request context, and then let the handler handle what happens. See how that works?

Performance Characteristics

Elimination of Network Overhead: Function calls replace HTTP requests for inter-component communication, avoiding unnecessary network latency for fundamental operations.

Transactional Consistency: Distributed database transaction handling is so frustrating to implement that most projects skip it. These projects almost inevitably face bizarre data inconsistencies at scale. With a monolith and single DB interface, we are finally ready to use Postgres as God intended:

// Atomic business operations without 2PC
tx, err := db.pool.Begin(ctx)
if err != nil {
    return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback(ctx) // safe no-op after a successful commit

// Multiple domain operations within single transaction
if err := tx.Commit(ctx); err != nil {
    return fmt.Errorf("failed to commit transaction: %w", err)
}

In short, it is profoundly easier to ensure database operations are properly transactional when you do not distribute state changes across network boundaries. The distinction is, in fact, between trivial on one side and totally unworkable on the other. The microservice pattern nearly killed sensible database transaction patterns, making databases much more error-prone for everyone; this is silly and easily fixed.

Memory Efficiency: Shared process memory eliminates serialization overhead between system components. Domain objects pass by reference rather than requiring JSON marshaling for every inter-service communication.

It doesn’t matter whether one aspect of a monolith uses 80% of its memory, so long as you can horizontally scale it all the same.

Conclusion: Strategic Monolithic Design

The astute reader will notice that I optimize for developer comprehensibility, even in the age of AI. This is true. There may come a day when developer comprehensibility no longer matters because the AIs are so far beyond us, but (a) I’m skeptical of that, and (b) if that becomes the case, we’re all out of a job anyway.

If the developer doesn’t have to worry about the rudiments, all development is better and less stressful, and velocity increases massively. The fundamental mistake most projects make is, for whatever reason, not caring enough about setting up the rudiments correctly at the start. The benefits are multiplicative, so it makes the most sense to establish those multiplicative gains from day one.

Often the excuse is time, but I argue that proper patterns take functionally no additional time on the margin as of 2025. A template example plus an LLM can prove sufficient for even the most frantic startup’s development schedule.

This repository optimizes heavily for this, and intends to demonstrate that well-architected monolithic services provide the modularity, testability, and team independence benefits typically attributed to microservices, while eliminating the network latency, operational complexity, and consistency challenges inherent in distributed architectures.

The key insight is that indirection through network boundaries is not automatically beneficial. Most systems do not require the independent scaling, technology diversity, or fault isolation that justify microservices complexity. Instead, strategic interface design within process boundaries provides superior development velocity, operational simplicity, and performance characteristics.

Like many other patterns in our industry, microservices are a specific tool for specific situations, and should not be chosen for being trendy. Thankfully, the trend is dying down now. As usual, we engineers often reach for the cool thing rather than the boring known quantity, but most common problems are now solved problems.

The architecture remains decomposable—each domain interface could become a separate service if future requirements justify the additional complexity. However, the default assumption is that monolithic design patterns provide sufficient modularity for most business requirements while maintaining the operational advantages of single deployment units.

Microservices solve real problems, but those problems are less common than the industry consensus suggests. This repository provides a template for systems that achieve enterprise reliability and maintainability through disciplined monolithic architecture rather than distributed system complexity.

I regret that the full content of most files is proprietary and I cannot show the full design. Nonetheless, I hope this post, in addition to my standard template linked atop the post, suffices for now.