# DynamoDB

## Overview
MBC CQRS Serverless uses DynamoDB as its primary data store, implementing CQRS and Event Sourcing patterns through a structured table design. Understanding the table structure is essential for building efficient applications.
## Table Architecture

In MBC CQRS Serverless, DynamoDB tables are organized into the following types:

### Entity Tables
| Table Type | Naming Convention | Purpose |
|---|---|---|
| Command Table | entity-command | Stores write commands (write model) |
| Data Table | entity-data | Stores current state (read model) |
| History Table | entity-history | Stores all versions for event sourcing |
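
For example, an entity named order is backed by three tables: order-command, order-data, and order-history.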
### System Tables

| Table | Purpose |
|---|---|
| tasks | Stores information about long-running asynchronous tasks |
| sequences | Holds sequence data for ID generation |
## Table Definition

Table definitions are stored in the prisma/dynamodbs folder. To add a new entity table:

### Step 1: Define Table in Configuration

Add the table name to prisma/dynamodbs/cqrs.json:

```json
["cat", "dog", "order"]
```
### Step 2: Run Migration

For local development:

```bash
# Migrate DynamoDB tables only
npm run migrate:ddb

# Migrate both DynamoDB and RDS
npm run migrate
```
## Key Design Patterns

### Standard Key Structure

All tables use a composite primary key consisting of:

| Key | Format | Example |
|---|---|---|
| pk | TYPE#tenantCode | ORDER#ACME |
| sk | TYPE#code | ORDER#ORD-000001 |
### Entity Key Examples

```ts
// Order entity
const orderKey = {
  pk: `ORDER#${tenantCode}`,
  sk: `ORDER#${orderId}`,
};

// User entity
const userKey = {
  pk: `USER#${tenantCode}`,
  sk: `USER#${userId}`,
};

// Hierarchical data (e.g., organization)
const departmentKey = {
  pk: `ORG#${tenantCode}`,
  sk: `DEPT#${parentId}#${deptId}`,
};
```
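
A hierarchical sort key like the one above lets a single Query fetch an entire sub-tree. Below is a minimal sketch using the AWS SDK v3 DocumentClient directly; the table name organization-data and the function name are illustrative, not part of the framework API:

```ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Fetch every department under a given parent in one request
async function listDepartments(tenantCode: string, parentId: string) {
  const result = await docClient.send(
    new QueryCommand({
      TableName: 'organization-data', // illustrative table name
      KeyConditionExpression: 'pk = :pk AND begins_with(sk, :prefix)',
      ExpressionAttributeValues: {
        ':pk': `ORG#${tenantCode}`,
        ':prefix': `DEPT#${parentId}#`,
      },
    }),
  );
  return result.Items ?? [];
}
```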
## Table Attributes

### Common Attributes

All entity tables share these common attributes:

| Attribute | Type | Description |
|---|---|---|
| pk | String | Partition key |
| sk | String | Sort key |
| id | String | Unique identifier (pk#sk hash) |
| code | String | Business code |
| name | String | Display name |
| tenantCode | String | Tenant identifier |
| type | String | Entity type |
| version | Number | Version for optimistic locking |
| attributes | Map | Custom entity attributes |
| createdBy | String | Creator user ID |
| createdIp | String | Creator IP address |
| createdAt | String | Creation timestamp (ISO 8601) |
| updatedBy | String | Last modifier user ID |
| updatedIp | String | Last modifier IP address |
| updatedAt | String | Last update timestamp (ISO 8601) |
### Command-Specific Attributes

| Attribute | Type | Description |
|---|---|---|
| source | String | Command source identifier |
| requestId | String | Request tracking ID |

### History-Specific Attributes

| Attribute | Type | Description |
|---|---|---|
| seq | Number | Sequence number in history |
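
For reference, the attribute sets above can be modeled in application code roughly as follows. This is a sketch derived from the tables in this section; the interface names are illustrative and not part of the framework's public API:

```ts
// Shape shared by command, data, and history tables (see Common Attributes)
interface CommonEntityItem {
  pk: string;                           // e.g. ORDER#ACME
  sk: string;                           // e.g. ORDER#ORD-000001
  id: string;                           // unique identifier (pk#sk hash)
  code: string;                         // business code
  name: string;                         // display name
  tenantCode: string;                   // tenant identifier
  type: string;                         // entity type
  version: number;                      // optimistic locking version
  attributes: Record<string, unknown>;  // custom entity attributes
  createdBy: string;
  createdIp: string;
  createdAt: string;                    // ISO 8601
  updatedBy: string;
  updatedIp: string;
  updatedAt: string;                    // ISO 8601
}

// Command table items add request tracking fields
interface CommandItem extends CommonEntityItem {
  source: string;                       // command source identifier
  requestId: string;                    // request tracking ID
}

// History table items add a sequence number
interface HistoryItem extends CommonEntityItem {
  seq: number;                          // sequence number in history
}
```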
## Secondary Indexes

### Adding Global Secondary Indexes

The default table configuration does not include GSIs. You can add them based on your query patterns. A common pattern is adding a code-index for fast lookups by business code.

Example GSI definition (add to your table configuration; the GSI key attributes must also be declared in AttributeDefinitions):
```json
{
  "AttributeDefinitions": [
    { "AttributeName": "tenantCode", "AttributeType": "S" },
    { "AttributeName": "code", "AttributeType": "S" }
  ],
  "GlobalSecondaryIndexes": [
    {
      "IndexName": "code-index",
      "KeySchema": [
        { "AttributeName": "tenantCode", "KeyType": "HASH" },
        { "AttributeName": "code", "KeyType": "RANGE" }
      ],
      "Projection": { "ProjectionType": "ALL" }
    }
  ]
}
```
Example usage with a custom GSI:

```ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Find entity by code (requires the code-index GSI)
const params = {
  TableName: 'entity-data',
  IndexName: 'code-index',
  KeyConditionExpression: 'tenantCode = :tenant AND code = :code',
  ExpressionAttributeValues: {
    ':tenant': tenantCode,
    ':code': entityCode,
  },
};

const result = await docClient.send(new QueryCommand(params));
```
## Best Practices

### Key Design

- Choose well-distributed partition keys: Spread data evenly across partitions and avoid hot keys
- Use hierarchical sort keys: Enable efficient range queries (see the begins_with sketch above)
- Include the tenant in the partition key: Ensure data isolation between tenants
### Query Optimization

- Use Query over Scan: Always include the partition key in your queries
- Limit result sets: Use pagination for large datasets (see the sketch below)
- Project needed attributes: Only retrieve the fields you need
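
A minimal pagination sketch using the DocumentClient's LastEvaluatedKey / ExclusiveStartKey pair; the page size and function name are illustrative:

```ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Page through a partition 50 items at a time instead of loading everything
async function* queryByPage(tableName: string, pk: string, pageSize = 50) {
  let lastKey: Record<string, any> | undefined;
  do {
    const page = await docClient.send(
      new QueryCommand({
        TableName: tableName,
        KeyConditionExpression: 'pk = :pk',
        ExpressionAttributeValues: { ':pk': pk },
        Limit: pageSize,
        ExclusiveStartKey: lastKey,
      }),
    );
    yield page.Items ?? [];
    lastKey = page.LastEvaluatedKey;
  } while (lastKey);
}
```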
### Capacity Planning
- Use on-demand capacity: Recommended for unpredictable workloads
- Monitor consumed capacity: Set up CloudWatch alarms
- Consider DAX: For read-heavy workloads requiring microsecond latency
## Local Development

### DynamoDB Local

The framework includes DynamoDB Local for development:

```bash
# Start DynamoDB Local (included in docker-compose)
docker-compose up -d dynamodb-local

# Access the DynamoDB Local Admin UI
open http://localhost:8001
```
### Environment Variables

```bash
# Local DynamoDB endpoint
DYNAMODB_ENDPOINT=http://localhost:8000
DYNAMODB_REGION=ap-northeast-1
```
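
If you need to reach the local tables from a standalone script or test, these variables can be wired into an AWS SDK v3 client as in the sketch below. The client setup here is illustrative only and is not the framework's internal configuration:

```ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient } from '@aws-sdk/lib-dynamodb';

// Point the client at DynamoDB Local when DYNAMODB_ENDPOINT is set
const client = new DynamoDBClient({
  region: process.env.DYNAMODB_REGION ?? 'ap-northeast-1',
  ...(process.env.DYNAMODB_ENDPOINT
    ? { endpoint: process.env.DYNAMODB_ENDPOINT }
    : {}),
});

export const docClient = DynamoDBDocumentClient.from(client);
```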
## Related Documentation
- Key Patterns: Detailed key design strategies
- Entity Patterns: Entity modeling guidelines
- Sequence: Sequence ID generation
- CommandService: Command handling and data sync