
Axon Framework Without Server: The Hidden Costs of Going DIY

By Max van Anen
Real-world lessons from implementing Axon Framework without the commercial server in enterprise environments
Axon Framework · Event Sourcing · Spring Boot · PostgreSQL

Axon Framework is excellent software. Well-designed, thoroughly tested, and backed by a fantastic community. This article shares real-world experience implementing it without Axon Server: the challenges, the workarounds, and whether it's worth the effort.

What follows is a practical guide based on a production enterprise implementation. You'll find working code, battle-tested configurations, and hard-learned lessons about running Axon Framework with PostgreSQL instead of buying the commercial server.

The PostgreSQL Setup

The documentation makes it sound simple: swap the event store, point to PostgreSQL, done. Here's what you actually need:

001-add-axon-tables.xml
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <changeSet id="add axon tables" author="Backend Team">
        <createSequence sequenceName="axon_association_value_seq" startValue="1" incrementBy="1"/>
        <createSequence sequenceName="axon_global_event_index_seq" startValue="1" incrementBy="1"/>

        <!-- Domain Event Entry Table -->
        <createTable tableName="axon_domain_event_entry">
            <column name="global_index" type="bigint">
                <constraints nullable="false" primaryKey="true" unique="true"/>
            </column>
            <column name="event_identifier" type="varchar(40)">
                <constraints nullable="false"/>
            </column>
            <column name="meta_data" type="bytea"/>
            <column name="payload" type="bytea">
                <constraints nullable="false"/>
            </column>
            <!-- ... more columns ... -->
        </createTable>

        <!-- Saga Tables -->
        <createTable tableName="axon_saga_entry">
            <column name="saga_id" type="varchar(40)">
                <constraints nullable="false" primaryKey="true"/>
            </column>
            <!-- ... -->
        </createTable>

        <!-- Token Store for Event Processors -->
        <createTable tableName="axon_token_entry">
            <column name="processor_name" type="varchar(255)">
                <constraints nullable="false" primaryKey="true"/>
            </column>
            <column name="segment" type="integer">
                <constraints nullable="false"/>
            </column>
            <!-- ... token tracking ... -->
        </createTable>
    </changeSet>
</databaseChangeLog>

Custom sequences, PostgreSQL-specific bytea types, complex indexes. And then you discover the default Hibernate dialect breaks with Axon's binary storage.

The NoToast Dialect Hack

PostgreSQL's TOAST mechanism for large objects conflicts with Axon's binary serialization. The fix:

NoToastPostgresSQLDialect.kt
class NoToastPostgresSQLDialect : PostgreSQLDialect(DatabaseVersion.make(13, 0)) {

    override fun columnType(sqlTypeCode: Int): String =
        if (sqlTypeCode == SqlTypes.BLOB) {
            "bytea" // Force PostgreSQL binary type
        } else {
            super.columnType(sqlTypeCode)
        }

    override fun castType(sqlTypeCode: Int): String =
        if (sqlTypeCode == SqlTypes.BLOB) {
            "bytea"
        } else {
            super.castType(sqlTypeCode)
        }

    override fun contributeTypes(
        typeContributions: TypeContributions,
        serviceRegistry: ServiceRegistry,
    ) {
        super.contributeTypes(typeContributions, serviceRegistry)
        val jdbcTypeRegistry = typeContributions.typeConfiguration.jdbcTypeRegistry
        jdbcTypeRegistry.addDescriptor(Types.BLOB, BinaryJdbcType.INSTANCE)
    }
}

This prevents PostgreSQL from using TOAST for Axon's event payloads, avoiding serialization issues. Credit to the Axon community (including Allard Buijze himself) for help troubleshooting this.
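For Hibernate to actually use the custom dialect, it has to be registered with Spring Boot. A minimal sketch, assuming the class above lives in a hypothetical com.enterprise.config package (adjust to your own package):

```yaml
spring:
  jpa:
    properties:
      hibernate:
        # Fully qualified name of the NoToast dialect class
        dialect: com.enterprise.config.NoToastPostgresSQLDialect
```

Without this registration Spring Boot auto-detects the stock PostgreSQL dialect and the bytea override never takes effect.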

Sequence Configuration

Critical detail: without proper sequence configuration, Hibernate creates gaps that break Tracking Event Processors:

axon-orm-mapping.xml
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings version="1.0" xmlns="http://java.sun.com/xml/ns/persistence/orm">

    <!-- Custom sequences to prevent gaps in event ordering -->
    <sequence-generator name="global_event_index_seq" allocation-size="1"
                        sequence-name="axon_global_event_index_seq" initial-value="1"/>
    <sequence-generator name="association_value_seq" allocation-size="1"
                        sequence-name="axon_association_value_seq" initial-value="1"/>

    <!-- Map Axon entities to custom table names -->
    <entity class="org.axonframework.eventsourcing.eventstore.jpa.DomainEventEntry">
        <table name="axon_domain_event_entry"/>
    </entity>
    <entity class="org.axonframework.eventsourcing.eventstore.jpa.SnapshotEventEntry">
        <table name="axon_snapshot_event_entry"/>
    </entity>
    <entity class="org.axonframework.eventhandling.tokenstore.jpa.TokenEntry">
        <table name="axon_token_entry"/>
    </entity>

    <!-- ... more entity mappings ... -->
</entity-mappings>

Without this, you get a single hibernate_sequence with gaps that break event processing. Subtle, but critical.
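The mapping file also has to be registered with the persistence unit, or JPA never sees it. A sketch using Spring Boot's standard property, assuming the file sits at the classpath root:

```yaml
spring:
  jpa:
    # Load the custom ORM mapping alongside annotation-based mappings
    mapping-resources: axon-orm-mapping.xml
```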

Production Configuration

The real complexity comes in production tuning:

AxonConfig.kt
@Configuration
class AxonConfig {

    @Autowired
    fun configureEventProcessors(
        eventProcessingConfigurer: EventProcessingConfigurer,
        readModelTransactionManager: PlatformTransactionManager,
        @Value("${atlas.axon.segment-count-per-replica}") segmentsPerReplica: Int,
        @Value("${atlas.axon.replica-count}") replicaCount: Int,
        @Value("${atlas.axon.event-batch-size}") eventBatchSize: Int,
        @Value("${atlas.axon.worker-pool-size}") workerPoolSize: Int,
    ) {
        val defaultSequencingPolicy = SequentialPerAggregatePolicy.instance()

        PooledStreamingProcessorConfiguration { _, builder ->
            builder
                .initialSegmentCount(segmentsPerReplica * replicaCount)
                .maxClaimedSegments(segmentsPerReplica)
                .batchSize(eventBatchSize)
                .coordinatorExecutor { ScheduledThreadPoolExecutor(1, AxonThreadFactory("coordinator")) }
                .workerExecutor { ScheduledThreadPoolExecutor(workerPoolSize, AxonThreadFactory("worker")) }
        }.let { psepConfig ->
            eventProcessingConfigurer
                .registerDefaultTransactionManager { SpringTransactionManager(readModelTransactionManager) }
                .usingPooledStreamingEventProcessors(psepConfig)
                .registerDefaultSequencingPolicy { defaultSequencingPolicy }
                // Special handling for specific processing groups
                .registerSubscribingEventProcessor(TERMINAL_TIMEZONE_PROCESSING_GROUP)
                .registerTrackingEventProcessorConfiguration(EMPLOYEE_ASSIGNMENT_SAGA_GROUP) { config ->
                    TrackingEventProcessorConfiguration
                        .forSingleThreadedProcessing()
                        .andInitialTrackingToken { config.eventStore().createTailToken() }
                }
        }
    }

    // XStream security configuration (because security vulnerabilities)
    @Autowired
    fun xStream(serializer: Serializer) {
        if (serializer is XStreamSerializer) {
            serializer.xStream.allowTypesByWildcard(arrayOf(
                "com.enterprise.**",
                "org.axonframework.**"
            ))
        } else {
            throw IllegalArgumentException("Serializer is not XStreamSerializer")
        }
    }

    // Custom transaction manager integration
    @Bean
    fun axonTransactionManager(eventStoreTransactionManager: PlatformTransactionManager) =
        SpringTransactionManager(eventStoreTransactionManager)
}

Every parameter matters: segment counts, batch sizes, thread pools. Without Axon Server's automatic load balancing, you manually tune everything. Get it wrong and you have idle processors or overwhelmed instances.
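The @Value properties in AxonConfig need concrete values per environment. The property names come from the config class above; the numbers below are illustrative starting points, not recommendations:

```yaml
atlas:
  axon:
    segment-count-per-replica: 4   # segments each instance may claim
    replica-count: 2               # total instances; segments = 4 * 2 = 8
    event-batch-size: 100          # events per processing transaction
    worker-pool-size: 4            # worker threads per instance
```

Note the coupling: initial segment count is fixed at first startup (segmentsPerReplica * replicaCount), so scaling the replica count later means splitting or merging segments by hand.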

The Polling Problem

Warning: Axon Framework generates hundreds of transactions per second even when completely idle. The polling comes from:

  • Tracking Event Processors continuously checking for new events
  • Saga managers polling for timeout events
  • Event processing retries and error handling mechanisms
  • Token store updates for tracking processor positions
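You can watch this idle load directly in PostgreSQL's standard statistics views. A quick sketch, assuming the application database is named atlas (substitute your own database and schema):

```sql
-- Commits so far: sample twice, a few seconds apart, and diff
SELECT xact_commit
FROM pg_stat_database
WHERE datname = 'atlas';

-- Churn on the token table from processors claiming and extending tokens
SELECT n_tup_upd, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'axon_token_entry';
```

On an idle system, the commit counter still climbs steadily and the token table's update count dwarfs its row count: that's the polling.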

The 1-2 Million Event Wall

Performance drops to roughly 250 events/second after 1 million events. PostgreSQL's TOAST creates OID churn for every binary event payload and token update. The axon_domain_event_entry table grows, but so does PostgreSQL's internal TOAST table behind it. By 2 million events, you're in serious DBA territory or migrating to Axon Server.
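You can see how much of the event store actually lives in TOAST using PostgreSQL's built-in size functions (table name per the Liquibase schema above):

```sql
-- Heap size vs. total size (TOAST + indexes included)
SELECT pg_size_pretty(pg_relation_size('axon_domain_event_entry'))       AS heap,
       pg_size_pretty(pg_total_relation_size('axon_domain_event_entry')) AS total;
```

A large gap between the two numbers means most of your payload bytes have been pushed out-of-line into TOAST, which is where the churn lives.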

Required PostgreSQL Tuning

  • Connection pooling for constant polling (not just peaks)
  • Custom indexes for Axon's query patterns
  • Aggressive vacuum for high-churn token tables
  • WAL tuning for continuous writes
  • Lock contention mitigation
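The vacuum point can be addressed with per-table autovacuum overrides on the high-churn token table. A sketch with illustrative thresholds, to be tuned against your own churn rate:

```sql
-- Vacuum axon_token_entry aggressively: it is tiny but updated constantly
ALTER TABLE axon_token_entry SET (
  autovacuum_vacuum_scale_factor = 0.0,  -- ignore table size entirely
  autovacuum_vacuum_threshold    = 1000, -- vacuum after ~1000 dead tuples
  autovacuum_vacuum_cost_delay   = 0     -- do not throttle the vacuum worker
);
```

The default scale-factor-based trigger is calibrated for tables where size tracks activity; a token table with a handful of rows and thousands of updates per minute never trips it in time.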

Your DBA needs to understand event sourcing, not just OLTP.

Monitoring (The Good News)

AxonIQ Console works without Axon Server. Plus excellent Prometheus and OpenTelemetry integration:

application.yml
# Axon Framework configuration
axon:
  axonserver.enabled: false          # Disable Axon Server
  metrics:
    micrometer:
      dimensional: true              # Enable dimensional metrics

# Management and monitoring
management:
  server:
    port: 9090                       # Separate management port
  endpoints:
    web:
      exposure:
        include: health, info, prometheus
  endpoint:
    prometheus:
      enabled: true
    health:
      show-details: always

# OpenTelemetry tracing configuration
opentelemetry:
  resource-attributes:
    service.name: ${spring.application.name}
  tracing:
    sampling:
      probability: 1.0
    propagation:
      type: w3c

# AxonIQ Console integration (production)
axoniq:
  console:
    application-name: ATLAS-PROD
    credentials: ${AXONIQ_CONSOLE_TOKEN}
    error-mode: FULL                 # Full error tracking and monitoring

You get metrics for event processing rates, command throughput, segment distribution, error tracking, and aggregate loading. The monitoring is genuinely excellent, better than most event sourcing implementations.
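With the Prometheus endpoint exposed, wiring a scrape job is straightforward. A minimal sketch for prometheus.yml, assuming a hypothetical host name and the management port from the config above:

```yaml
scrape_configs:
  - job_name: 'atlas'
    metrics_path: /actuator/prometheus   # Spring Boot actuator endpoint
    static_configs:
      - targets: ['atlas-app:9090']      # management.server.port from application.yml
```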

The Real Cost

Engineering Time

  • Initial setup: Days to weeks
  • Ongoing tuning: Continuous
  • Custom tooling: Substantial
  • PostgreSQL expertise: Mandatory

Operational Reality

  • Specialized knowledge required
  • Complex upgrade paths
  • Bus factor risk
  • Performance bottlenecks at scale

Architectural Lock-In

Once you're this deep, Axon is your architecture. Event serialization, aggregate lifecycle, sagas, custom PostgreSQL, monitoring. You're not using a library, you're married to it.

When DIY Makes Sense

  • Existing PostgreSQL expertise + zero infrastructure budget
  • Strict data residency requirements
  • Deep customization needs
  • Learning exercise

Your Options

1. Buy Axon Server

Just buy it. The license costs less than the engineering time you'll burn. You get zero-config event storage, automatic load balancing, and professional support.

2. Go Full DIY

If you must, commit fully. Use our configs, expect pain, budget for PostgreSQL experts.

3. Consider Alternatives

  • EventStore: Purpose-built, transparent pricing
  • Kafka: Event streaming with custom projections
  • Cloud services: EventBridge, Pub/Sub, Event Hubs
  • Traditional architecture: Sometimes CRUD is enough

Bottom Line

Our PostgreSQL implementation works in production. It's also a constant operational burden: weeks of engineering time to build, and ongoing expertise to maintain.

If you're trying to save money on licensing, you're optimizing the wrong metric. The license cost is a fraction of what you'll spend on engineering time and operational overhead.

Choose your battles. Sometimes paying for tools that work is the most pragmatic architectural decision.

For more insights on event-driven architectures, see our detailed analysis on when Event Sourcing is the right choice and when traditional approaches work better.