High Performance
Handle 50K+ messages/sec. Built with C++17, uWebSockets, and async PostgreSQL for minimal latency.
Unlimited ordered partitions that never block each other. Consumer groups, replay, transactional delivery — ACID-guaranteed.
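For example, ordering is scoped to a partition, so a backlog in one partition never stalls another. A minimal sketch using the JavaScript client from the quickstart below (the chats queue name and payloads are illustrative, and the queue is assumed to have been created as shown there):

import { Queen } from 'queen-mq'

const queen = new Queen('http://localhost:6632')

// Messages within a partition are delivered in order...
await queen.queue('chats')
  .partition('conversation-a')
  .push([{ data: { text: 'first in A' } }, { data: { text: 'second in A' } }])

// ...while other partitions keep flowing independently,
// even if consumers are currently slow on conversation-a.
await queen.queue('chats')
  .partition('conversation-b')
  .push([{ data: { text: 'independent of A' } }])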

Born from Real Production Needs
Queen was created at Smartness to power Smartchat - an AI-powered guest messaging platform for the hospitality industry.
At Smartness, we use Kafka extensively across our infrastructure and know it well. For Smartchat's message backbone, we initially chose Kafka for its strong reliability guarantees.
However, we encountered a use case mismatch: in Kafka, one slow message holds up every message behind it in the same partition. For most workloads this isn't an issue, but Smartchat's message processing has inherently variable latency (AI pipelines, human-in-the-loop steps). With potentially 100,000+ concurrent chats, keeping conversations independent would require a Kafka partition per conversation, which isn't practical at that scale.
We started moving long-running operations to custom PostgreSQL queue tables. As we built out the system, the list of things we needed kept growing, and we realized we had built a complete message queue system that better fit our specific requirements.
Queen now handles Smartchat's message infrastructure, processing 100,000+ messages daily in production.
Technical Note
If you're building systems where message processing has inherently variable latency (chat systems, AI pipelines, human-in-the-loop workflows), Queen's partition model may be a better fit than traditional streaming platforms.
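As a sketch of that pattern with the JavaScript client: one partition per conversation, so a slow AI reply in one chat never delays the others. The queue name, group name, and generateReply stub are placeholders, and default ack behavior is assumed.

import { Queen } from 'queen-mq'

const queen = new Queen('http://localhost:6632')

// Placeholder for a slow, variable-latency step (LLM call, human review, ...)
const generateReply = async (data) => { /* ... */ }

// Each conversation gets its own partition, so ordering is preserved
// per chat without one chat's latency blocking the rest.
async function enqueueChatMessage(chatId, text) {
  await queen.queue('chat-messages')
    .partition(`conversation-${chatId}`)
    .push([{ data: { chatId, text } }])
}

await enqueueChatMessage('42', 'Is late checkout available?')

// Consumers in a group work across partitions concurrently; only messages
// from the same conversation wait on each other.
await queen.queue('chat-messages')
  .group('ai-responder')
  .concurrency(10)
  .each()
  .consume(async (message) => {
    await generateReply(message.data)
  })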
Start with Docker:
# Start PostgreSQL and Queen server
docker network create queen
docker run --name postgres --network queen \
-e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres
docker run -p 6632:6632 --network queen \
-e PG_HOST=postgres \
-e PG_PASSWORD=postgres \
-e NUM_WORKERS=2 \
-e DB_POOL_SIZE=5 \
-e SIDECAR_POOL_SIZE=30 \
-e SIDECAR_MICRO_BATCH_WAIT_MS=10 \
-e POP_WAIT_INITIAL_INTERVAL_MS=500 \
-e POP_WAIT_BACKOFF_THRESHOLD=1 \
-e POP_WAIT_BACKOFF_MULTIPLIER=3.0 \
-e POP_WAIT_MAX_INTERVAL_MS=5000 \
-e DEFAULT_SUBSCRIPTION_MODE=new \
-e LOG_LEVEL=info \
smartnessai/queen-mq:0.12.3
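The POP_WAIT_* settings tune how long consumers wait between empty polls. Assuming they mean what their names suggest (an initial interval that grows by the multiplier after the threshold of consecutive empty polls, capped at the maximum), the values above would produce a schedule like this illustrative sketch:

// Illustrative only: backoff schedule implied by the settings above,
// assuming interval = min(previous * multiplier, max) once the threshold
// of consecutive empty polls is reached.
const initialMs = 500     // POP_WAIT_INITIAL_INTERVAL_MS
const threshold = 1       // POP_WAIT_BACKOFF_THRESHOLD
const multiplier = 3.0    // POP_WAIT_BACKOFF_MULTIPLIER
const maxMs = 5000        // POP_WAIT_MAX_INTERVAL_MS

let intervalMs = initialMs
const schedule = []
for (let emptyPolls = 1; emptyPolls <= 5; emptyPolls++) {
  schedule.push(intervalMs)
  if (emptyPolls >= threshold) {
    intervalMs = Math.min(intervalMs * multiplier, maxMs)
  }
}
console.log(schedule) // [ 500, 1500, 4500, 5000, 5000 ]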
# Install JavaScript client
npm install queen-mq
# Or install Python client
pip install queen-mq
# Start building!

Then use the client to push and consume messages:
import { Queen } from 'queen-mq'

// Connect
const queen = new Queen('http://localhost:6632')

// Create queue with configuration
await queen.queue('orders')
  .config({ leaseTime: 30, retryLimit: 3 })
  .create()

// Push messages with guaranteed order per partition
await queen.queue('orders')
  .partition('customer-123')
  .push([{ data: { orderId: 'ORD-001', amount: 99.99 } }])

// Consume with consumer groups for scalability
await queen.queue('orders')
  .group('order-processor')
  .concurrency(10)
  .batch(20)
  .autoAck(false)
  .each()
  .consume(async (message) => {
    await processOrder(message.data)
  })
  .onSuccess(async (message) => {
    await queen.ack(message, true, { group: 'order-processor' })
  })
  .onError(async (message, error) => {
    await queen.ack(message, false, { group: 'order-processor' })
  })

Queen MQ is released under the Apache 2.0 License.