Log Query Guide
The operations platform provides log query capabilities based on Loki. The "Logs" group in the left sidebar contains two panels, split by data source:
| Panel | Data Source | Main Stream Labels | Use Case |
|---|---|---|---|
| Container Console | Docker container stdout/stderr (collected by alloy) | container / service_name / log_source=docker | Troubleshoot middleware and infrastructure logs (MySQL, Redis, Kafka containers, etc.) |
| Service Logs | Structured JSON logs from HAP internal microservices (serilog pushed directly to Loki) | service_name / hostname / action / level / extras / stack | Troubleshoot business calls, exception stack traces, specific parameters (phone numbers, user IDs, etc.) |
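If you later move to Grafana's Explore view, the two panels correspond roughly to two LogQL stream selectors. This is a sketch based on the labels in the table above; the container name `mysql` is an illustrative example, and exact label values may differ per deployment:

```logql
# Container Console: one container's stdout/stderr
{log_source="docker", container="mysql"}

# Service Logs: one HAP microservice's structured JSON logs
{service_name="worksheetservice"}
```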
Common Filters
Both panels have at the top:
- Search keyword (textbox): full-text match (case-insensitive)
- Multi-select dropdown filters: Container Console has "Container", Service Logs has "Service" + "Level"
The time window defaults to the last 30 minutes and can be adjusted in the top right corner.
Container Console
Typical Usage
View logs for a specific container
- Select the container name from the "Container" dropdown (e.g., `milvus-etcd`, `script-app-1`)
- The list shows recent stdout output for that container
Filter by keyword
- Enter the content to search in "Search keyword"
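The two filters above map onto simple LogQL if you need the same query in Explore. A sketch; the search keyword "connection refused" is a hypothetical example, and the `(?i)` flag mirrors the panel's case-insensitive matching:

```logql
# "Container" dropdown + "Search keyword" (case-insensitive)
{container="script-app-1"} |~ "(?i)connection refused"
```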
Service Logs
Auto-expanded Stream Labels
Each log entry in the Service Logs panel automatically expands structured fields below the original line:
```
2026-05-13 17:43:22.911 {"Message":"SMS service not configured","level":"error"}
⏵ action: /MD.SmsService.Sms/SendMessage
⏵ level: error
⏵ elapsed_ms: 0
⏵ extras: [[Mobile, +8618596683881], [Message, Your verification code is 144249...]]
⏵ stack: SMS service not configured
    at MD.Sms.GrpcService.Implements.SmsImplement.PreValidAsync...
```
| Stream Label | Description |
|---|---|
| service_name | HAP microservice name (e.g., smssenderservice, workflow, worksheetservice) |
| action | Business call method (gRPC interface path or Java/.NET class method) |
| detected_level | Log level (info / warn / error) |
| elapsedmilliseconds | Call duration in milliseconds |
| extras | Business parameters (request/response details, often containing phone numbers, IDs, etc.) |
| stack | Exception call stack (complete .NET / Java stack trace) |
| hostname | Container hostname where the service runs |
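Because each entry is JSON, the expanded fields can also be filtered in LogQL after a `| json` parse. A sketch: it assumes `elapsedmilliseconds` is exposed as a parsed field or label, and the 1000 ms threshold is an arbitrary example:

```logql
# Calls from worksheetservice slower than one second
{service_name="worksheetservice"} | json | elapsedmilliseconds > 1000
```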
Typical Query Scenarios
1. Filter by service
Select `smssenderservice` from the "Service" dropdown → view only SMS service related logs.
2. Search for phone number / user ID / traceID
Enter 18596683881 in "Search keyword" → single query matches both log body and all stream labels (phone numbers are usually in extras rather than the body).
3. View errors only
Select error from "Level" dropdown, or enter error in "Search keyword".
4. Cross-service trace tracking
Search for a traceID keyword (e.g., `traceID=4a45ebccbdd91418`) to see call chain logs across all related services.
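The four scenarios above translate roughly to LogQL as sketched below (using the example IDs from this guide). Note one difference: a plain `|=` filter only matches the log body, whereas the panel's keyword box also matches stream labels such as `extras`:

```logql
{service_name="smssenderservice"}                          # 1. filter by service
{service_name=~".+"} |= "18596683881"                      # 2. phone number / user ID search
{service_name="smssenderservice", detected_level="error"}  # 3. errors only
{hostname=~".+"} |= "traceID=4a45ebccbdd91418"             # 4. cross-service trace
```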
Loki Data Retention Period
Default retention is 30 days (`ENV_LOKI_RETENTION=720h`), configurable in `ops.yaml` / ConfigMap.
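In Loki's own configuration the same value typically lands in `limits_config`. A sketch of the standard Loki settings involved; how `ops.yaml` maps the environment variable into this file may differ per deployment:

```yaml
limits_config:
  retention_period: 720h   # 30 days; driven by ENV_LOKI_RETENTION here
compactor:
  retention_enabled: true  # retention is enforced by the compactor
```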
Advanced LogQL
For more complex queries (aggregation, statistics, complex regex), go to Grafana's Explore view (Grafana icon in bottom left → Explore) to write LogQL directly. Examples:
- `{service_name="smssenderservice", detected_level="error"}` — all errors from SMS service
- `{hostname=~".+"} |~ "(?i)18596683881"` — search phone number across all HAP microservices
- `sum(rate({hostname=~".+"}[5m])) by (service_name)` — log rate per service in last 5 minutes
- `{container=~"milvus-.*"} |= "error" | json` — milvus container error logs with JSON parsing
For more syntax, refer to the LogQL official documentation.