Merged

0.8.4 #174

14 changes: 14 additions & 0 deletions .github/workflows/publish-images.yml
@@ -204,3 +204,17 @@ jobs:
echo "docker pull logtide/frontend:${{ needs.prepare.outputs.version }}" >> $GITHUB_STEP_SUMMARY
echo "docker pull ghcr.io/${{ github.repository_owner }}/logtide-frontend:${{ needs.prepare.outputs.version }}" >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY

helm-update:
name: Update Helm Chart
needs: [prepare, merge]
runs-on: ubuntu-latest
if: needs.prepare.outputs.is_stable == 'true'
steps:
- name: Trigger helm chart update
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.HELM_CHART_PAT }}
repository: logtide-dev/logtide-helm-chart
event-type: logtide-release
client-payload: '{"version": "${{ needs.prepare.outputs.version }}"}'
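
The `helm-update` job above boils down to a single call against GitHub's repository-dispatch REST endpoint. A minimal TypeScript sketch of the payload the action sends, for illustration only — the endpoint URL and token handling follow the GitHub REST docs, and nothing here is LogTide-specific code:

```typescript
// Shape of a repository_dispatch request body, per the GitHub REST API:
// POST /repos/{owner}/{repo}/dispatches
interface DispatchPayload {
  event_type: string;
  client_payload: Record<string, unknown>;
}

function buildDispatch(version: string): DispatchPayload {
  return {
    event_type: 'logtide-release',
    client_payload: { version },
  };
}

// The receiving workflow in logtide-dev/logtide-helm-chart reads
// github.event.client_payload.version to bump appVersion.
const body = buildDispatch('0.8.4');
console.log(JSON.stringify(body));
// → {"event_type":"logtide-release","client_payload":{"version":"0.8.4"}}

// To actually send it (requires a PAT with access to the target repo):
// await fetch('https://api.github.com/repos/logtide-dev/logtide-helm-chart/dispatches', {
//   method: 'POST',
//   headers: { Authorization: `Bearer ${token}`, Accept: 'application/vnd.github+json' },
//   body: JSON.stringify(body),
// });
```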
19 changes: 19 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,25 @@ All notable changes to LogTide will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


## [0.8.4] - 2026-03-19

### Added
- **Skeleton loaders and loading overlays**: all dashboard pages now show content-shaped loading states instead of blank spinners
- New `Skeleton`, `SkeletonTable`, and `TableLoadingOverlay` components (`src/lib/components/ui/skeleton/`)
- Directional shimmer animation via `@keyframes shimmer` using design tokens — works in light and dark mode, disabled for `prefers-reduced-motion`
- **Initial load** (no data yet): animated skeleton rows mirroring the page layout — stat cards on `/dashboard`, project cards on `/dashboard/projects`, table rows on search, traces, errors, admin tables, incidents, alerts history, and members
- **Re-fetch** (filter change, pagination): existing content dims with a translucent overlay and centered spinner, preventing layout shift and context loss
- Pages updated: `/dashboard`, `/dashboard/search`, `/dashboard/projects`, `/dashboard/alerts`, `/dashboard/errors`, `/dashboard/traces`, `/dashboard/security`, `/dashboard/security/incidents`, `/dashboard/admin/organizations`, `/dashboard/admin/users`, `/dashboard/admin/projects`, `/dashboard/settings/members`
- Automated Helm chart releases: every stable Docker image release now triggers a `repository_dispatch` to `logtide-dev/logtide-helm-chart`, which auto-bumps `appVersion` and chart `version` (patch), commits, and publishes a new chart release to the Helm repo on GitHub Pages

### Fixed
- API 400 responses now include a `details` array with field-level validation errors instead of just a generic message. Covers both Fastify/AJV schema validation and Zod validation errors (including uncaught `ZodError` that previously returned 500)
- Admin pages returned 502 Bad Gateway on direct load/reload: the admin layout (`+layout@.svelte`) breaks out of the dashboard layout chain, so `ssr = false` was not inherited; added a dedicated `+layout.ts` to the admin section
- `/dashboard/admin/projects/[id]` crashed with "Something went wrong" due to `formatDate` being called but not defined (function was named `formatTimestamp`)
- `POST /api/v1/logs/identifiers/batch` was slow: the route was calling `reservoir.getByIds` (hitting ClickHouse/TimescaleDB/MongoDB) only to verify project access, then querying `log_identifiers` (PostgreSQL) separately. Since `log_identifiers` already stores `log_id → project_id` + identifier data, the storage engine call is now bypassed entirely — one PostgreSQL query replaces the N×storage-engine-roundtrips loop. Added bloom filter skip index on `id` in ClickHouse and a standalone `id` index in TimescaleDB (migration 032) for `getByIds` used by `findCorrelatedLogs`
- `GET /api/v1/logs/hostnames` was taking 8+ seconds: the 6h window cap was only applied when `from` was absent — explicit `from` params (e.g. a 24h range from the search page) bypassed it and triggered a full-range metadata scan; the cap now clamps any window to 6h max. Added `limit: 500` to the distinct call. Per-engine optimizations: **ClickHouse** adds a `hostname` materialized column (computed at ingest, eliminates `JSONExtractString` at query time) and uses it directly in distinct queries; **TimescaleDB** adds a composite expression index `(project_id, (metadata->>'hostname'), time)` (migration 032); **MongoDB** adds a sparse compound index on `metadata.hostname`. All three engines also now extract the metadata field in a subquery (once per row vs 3×)
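
The first fix above can be sketched as follows — a simplified stand-in for the actual `server.ts` handler, assuming the Fastify `validation` array and Zod's `errors` array shapes (both of which the real handler checks):

```typescript
// Simplified sketch: map an error to an HTTP body, attaching field-level
// `details` from Fastify validation or an uncaught ZodError.
interface ErrorBody {
  statusCode: number;
  error: string;
  details?: unknown[];
}

interface AppError {
  name?: string;
  message: string;
  statusCode?: number;
  validation?: unknown[]; // set by Fastify/AJV schema validation
  errors?: unknown[]; // set by ZodError (alias for its issues)
}

function toErrorBody(err: AppError): ErrorBody {
  let statusCode = typeof err.statusCode === 'number' ? err.statusCode : undefined;
  if (!statusCode && err.validation) statusCode = 400;
  if (!statusCode && err.name === 'ZodError') statusCode = 400; // previously fell through to 500
  const body: ErrorBody = { statusCode: statusCode ?? 500, error: err.message };
  if (err.validation) body.details = err.validation;
  else if (err.name === 'ZodError' && Array.isArray(err.errors)) body.details = err.errors;
  return body;
}

// An uncaught ZodError now yields 400 with field-level details:
const zodLike: AppError = {
  name: 'ZodError',
  message: 'Invalid input',
  errors: [{ path: ['limit'], message: 'Expected number, received string' }],
};
console.log(JSON.stringify(toErrorBody(zodLike)));
// → {"statusCode":400,"error":"Invalid input","details":[{"path":["limit"],"message":"Expected number, received string"}]}
```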

## [0.8.3] - 2026-03-18

### Added
8 changes: 4 additions & 4 deletions README.md
@@ -16,14 +16,14 @@
<a href="https://codecov.io/gh/logtide-dev/logtide"><img src="https://codecov.io/gh/logtide-dev/logtide/branch/main/graph/badge.svg" alt="Coverage"></a>
<a href="https://hub.docker.com/r/logtide/backend"><img src="https://img.shields.io/docker/v/logtide/backend?label=docker&logo=docker" alt="Docker"></a>
<a href="https://artifacthub.io/packages/helm/logtide/logtide"><img src="https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/logtide" alt="Artifact Hub"></a>
<img src="https://img.shields.io/badge/version-0.8.3-blue.svg" alt="Version">
<img src="https://img.shields.io/badge/version-0.8.4-blue.svg" alt="Version">
<img src="https://img.shields.io/badge/license-AGPLv3-blue.svg" alt="License">
<img src="https://img.shields.io/badge/status-stable_alpha-success.svg" alt="Status">
</div>

<br />

> **🚀 RELEASE 0.8.3:** LogTide now supports **Multi-Engine Storage** (ClickHouse, MongoDB) and **Advanced Browser Observability**.
> **🚀 RELEASE 0.8.4:** LogTide now supports **Multi-Engine Storage** (ClickHouse, MongoDB) and **Advanced Browser Observability**.

---

@@ -46,7 +46,7 @@ Designed for teams that need **GDPR compliance**, **full data ownership**, and *
### Logs Explorer
![LogTide Logs](docs/images/logs.png)

### Performance & Metrics (New in 0.8.3)
### Performance & Metrics (New in 0.8.4)
![LogTide Metrics](docs/images/metrics.png)

### Distributed Tracing
@@ -87,7 +87,7 @@ We host it for you. Perfect for testing. [**Sign up at logtide.dev**](https://lo

---

## ✨ Core Features (v0.8.3)
## ✨ Core Features (v0.8.4)

* 🚀 **Multi-Engine Reservoir:** Pluggable storage layer supporting **TimescaleDB**, **ClickHouse**, and **MongoDB**.
* 🌐 **Browser SDK Enhancements:** Automatic collection of **Web Vitals** (LCP, INP, CLS), user session tracking, and click/network breadcrumbs.
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "logtide",
"version": "0.8.3",
"version": "0.8.4",
"private": true,
"description": "LogTide - Self-hosted log management platform",
"author": "LogTide Team",
25 changes: 25 additions & 0 deletions packages/backend/migrations/032_hostname_index.sql
@@ -0,0 +1,25 @@
-- Migration 032: Composite expression index for hostname lookups (TimescaleDB)
--
-- Migration 023 tried a standalone expression index on (metadata->>'hostname') but
-- TimescaleDB planner preferred seq scan because it had no way to narrow by project_id.
--
-- A composite index (project_id, hostname_expr, time) lets the planner do an index
-- range scan scoped to a single project, then read distinct hostname values directly
-- from the index without touching row data.
--
-- Note: ClickHouse and MongoDB handle this via engine-level changes in reservoir
-- (materialized column and compound index respectively). This migration only applies
-- to TimescaleDB instances.
--
-- Note: CONCURRENTLY is not supported on TimescaleDB hypertables.

CREATE INDEX IF NOT EXISTS idx_logs_project_hostname
ON logs (project_id, (metadata->>'hostname'), time DESC)
WHERE metadata->>'hostname' IS NOT NULL
AND metadata->>'hostname' != '';

-- Index for getByIds lookups (e.g. findCorrelatedLogs).
-- The primary key is (time, id), which is only useful when `time` is known.
-- A standalone index on id lets WHERE id = ANY(...) resolve without chunk scans.
CREATE INDEX IF NOT EXISTS idx_logs_id
ON logs (id);
2 changes: 1 addition & 1 deletion packages/backend/package.json
@@ -1,6 +1,6 @@
{
"name": "@logtide/backend",
"version": "0.8.3",
"version": "0.8.4",
"private": true,
"description": "LogTide Backend API",
"type": "module",
54 changes: 17 additions & 37 deletions packages/backend/src/modules/correlation/routes.ts
@@ -331,45 +331,25 @@ export default async function correlationRoutes(fastify: FastifyInstance) {
});
}

// Fetch logs by IDs across accessible projects (reservoir: works with any engine)
const allFoundLogs: Array<{ id: string; projectId: string }> = [];
for (const pid of searchProjectIds) {
const found = await reservoir.getByIds({ ids: logIds, projectId: pid });
for (const log of found) {
allFoundLogs.push({ id: log.id, projectId: log.projectId });
}
// Stop once we've found all requested logs
if (allFoundLogs.length >= logIds.length) break;
}

if (allFoundLogs.length === 0) {
return reply.send({
success: true,
data: { identifiers: {} },
});
}
// Query log_identifiers directly — log_identifiers is always in PostgreSQL and
// already contains log_id, project_id, and identifier data. No need to hit the
// storage engine (ClickHouse/TimescaleDB/MongoDB) at all.
// The project_id IN searchProjectIds clause enforces access control.
const rows = await db
.selectFrom('log_identifiers')
.select(['log_id', 'identifier_type', 'identifier_value', 'source_field'])
.where('log_id', 'in', logIds)
.where('project_id', 'in', searchProjectIds)
.execute();

// Verify project access for the first log's project
const firstProjectId = allFoundLogs[0].projectId || projectId || '';
const hasAccess = await verifyProjectAccess(request as any, firstProjectId);
if (!hasAccess) {
return reply.status(403).send({
success: false,
error: 'Access denied to these logs',
});
}

// Only return identifiers for logs in accessible projects
const accessibleLogIds = allFoundLogs
.filter((log) => log.projectId === firstProjectId)
.map((log) => log.id);

const identifiersMap = await correlationService.getLogIdentifiersBatch(accessibleLogIds);

// Convert Map to plain object for JSON serialization
const identifiers: Record<string, Array<{ type: string; value: string; sourceField: string }>> = {};
for (const [logId, matches] of identifiersMap) {
identifiers[logId] = matches;
for (const row of rows) {
if (!identifiers[row.log_id]) identifiers[row.log_id] = [];
identifiers[row.log_id].push({
type: row.identifier_type,
value: row.identifier_value,
sourceField: row.source_field,
});
}

return reply.send({
13 changes: 9 additions & 4 deletions packages/backend/src/modules/query/service.ts
@@ -450,16 +450,20 @@
* Hostnames are extracted from metadata.hostname field.
* Cached for performance - used for filter dropdowns.
*
* PERFORMANCE: Defaults to last 6 hours. Metadata extraction is expensive
* on large windows. With 5-minute cache, most requests are served from cache.
* PERFORMANCE: Window is capped at 6 hours regardless of what the caller passes.
* JSONB extraction is expensive on large datasets — 6h ≈ 350ms, 24h+ ≈ 8s+.
* For a filter dropdown this is an acceptable trade-off: hostnames are stable.
* With 5-minute cache, most requests are served from cache after the first hit.
*/
async getDistinctHostnames(
projectId: string | string[],
from?: Date,
to?: Date
): Promise<string[]> {
// PERFORMANCE: Default to last 6 hours
const effectiveFrom = from || new Date(Date.now() - 6 * 60 * 60 * 1000);
// PERFORMANCE: Cap window to 6h max. If the caller requests a longer window
// (e.g. 24h), silently clamp it — JSONB distinct over large ranges is O(rows).
const sixHoursAgo = new Date(Date.now() - 6 * 60 * 60 * 1000);
const effectiveFrom = !from || from < sixHoursAgo ? sixHoursAgo : from;

// Try cache first
const cacheKey = CacheManager.statsKey(
Expand All @@ -482,6 +486,7 @@ export class QueryService {
projectId,
from: effectiveFrom,
to: to ?? new Date(),
limit: 500,
});

const hostnames = result.values;
18 changes: 13 additions & 5 deletions packages/backend/src/server.ts
@@ -67,7 +67,8 @@ export async function build(opts = {}) {
// Determine HTTP status code:
// 1. error.statusCode set by Fastify (validation, rate limit) or custom parsers
// 2. Fastify validation errors have a .validation property → 400
// 3. Default → 500
// 3. ZodError (name === 'ZodError') → 400
// 4. Default → 500
let statusCode = typeof (error as any).statusCode === 'number'
? (error as any).statusCode
: undefined;
@@ -76,11 +76,18 @@
statusCode = 400;
}

if (!statusCode && (error as any).name === 'ZodError') {
statusCode = 400;
}

if (statusCode && statusCode >= 400 && statusCode < 500) {
reply.code(statusCode).send({
statusCode,
error: errMessage,
});
const body: Record<string, unknown> = { statusCode, error: errMessage };
if ((error as any).validation) {
body.details = (error as any).validation;
} else if ((error as any).name === 'ZodError' && Array.isArray((error as any).errors)) {
body.details = (error as any).errors;
}
reply.code(statusCode).send(body);
return;
}

2 changes: 1 addition & 1 deletion packages/backend/src/utils/internal-logger.ts
@@ -58,7 +58,7 @@ export async function initializeInternalLogging(): Promise<string | null> {
dsn,
service: process.env.SERVICE_NAME || 'logtide-backend',
environment: process.env.NODE_ENV || 'development',
release: process.env.npm_package_version || '0.8.3',
release: process.env.npm_package_version || '0.8.4',
batchSize: 5, // Smaller batch for internal logs to see them faster
flushInterval: 5000,
maxBufferSize: 1000,
2 changes: 1 addition & 1 deletion packages/frontend/package.json
@@ -1,6 +1,6 @@
{
"name": "@logtide/frontend",
"version": "0.8.3",
"version": "0.8.4",
"private": true,
"description": "LogTide Frontend Dashboard",
"type": "module",
25 changes: 25 additions & 0 deletions packages/frontend/src/app.css
@@ -89,6 +89,31 @@
}
}

/* Skeleton shimmer animation */
@keyframes shimmer {
0% { background-position: -200% 0; }
100% { background-position: 200% 0; }
}

.skeleton {
background-color: hsl(var(--muted));
background-image: linear-gradient(
90deg,
hsl(var(--muted)) 0%,
hsl(var(--background) / 0.8) 50%,
hsl(var(--muted)) 100%
);
background-size: 200% 100%;
animation: shimmer 1.8s ease-in-out infinite;
}

@media (prefers-reduced-motion: reduce) {
.skeleton {
animation: none;
background-image: none;
}
}

/* High contrast mode support */
@media (prefers-contrast: more) {
:root {
2 changes: 1 addition & 1 deletion packages/frontend/src/hooks.client.ts
@@ -9,7 +9,7 @@ if (dsn) {
dsn,
service: 'logtide-frontend-client',
environment: env.PUBLIC_NODE_ENV || 'production',
release: env.PUBLIC_APP_VERSION || '0.8.3',
release: env.PUBLIC_APP_VERSION || '0.8.4',
debug: env.PUBLIC_NODE_ENV === 'development',
browser: {
// Core Web Vitals (LCP, INP, CLS, TTFB)
2 changes: 1 addition & 1 deletion packages/frontend/src/hooks.server.ts
@@ -82,7 +82,7 @@ export const handle = dsn
dsn,
service: 'logtide-frontend',
environment: privateEnv?.NODE_ENV || 'production',
release: process.env.npm_package_version || '0.8.3', }) as unknown as Handle,
release: process.env.npm_package_version || '0.8.4', }) as unknown as Handle,
requestLogHandle,
configHandle
)
2 changes: 1 addition & 1 deletion packages/frontend/src/lib/components/Footer.svelte
@@ -1,7 +1,7 @@
<script lang="ts">
import Github from "@lucide/svelte/icons/github";

const version = "Alpha v0.8.3";
const version = "Alpha v0.8.4";
const currentYear = new Date().getFullYear();
const githubUrl = "https://github.com/logtide-dev/logtide";
</script>
5 changes: 5 additions & 0 deletions packages/frontend/src/lib/components/ui/skeleton/index.ts
@@ -0,0 +1,5 @@
import Root from './skeleton.svelte';
import SkeletonTable from './skeleton-table.svelte';
import TableLoadingOverlay from './table-loading-overlay.svelte';

export { Root, Root as Skeleton, SkeletonTable, TableLoadingOverlay };
60 changes: 60 additions & 0 deletions packages/frontend/src/lib/components/ui/skeleton/skeleton-table.svelte
@@ -0,0 +1,60 @@
<script lang="ts">
import Skeleton from './skeleton.svelte';

interface Props {
rows?: number;
columns?: number;
columnWidths?: string[];
class?: string;
className?: string;
}

let {
rows = 5,
columns = 4,
columnWidths = [],
class: classProp = '',
className = '',
}: Props = $props();

// Vary widths naturally so rows don't look identical
const defaultWidths = ['70%', '55%', '65%', '45%', '60%', '50%', '40%'];

function getCellWidth(colIndex: number): string {
if (columnWidths[colIndex]) return columnWidths[colIndex];
return defaultWidths[colIndex % defaultWidths.length];
}

// Vary height slightly per row for a natural look
function getRowVariant(rowIndex: number): string {
return rowIndex % 3 === 0 ? 'h-4' : rowIndex % 3 === 1 ? 'h-3.5' : 'h-4';
}
</script>

<div class="rounded-md border overflow-hidden {classProp} {className}">
<table class="w-full">
<thead>
<tr class="border-b bg-muted/30">
{#each Array(columns) as _, i}
<th class="px-4 py-3 text-left">
<Skeleton class="h-3 w-20" />
</th>
{/each}
</tr>
</thead>
<tbody>
{#each Array(rows) as _, rowIndex}
<tr class="border-b last:border-0">
{#each { length: columns } as _, colIndex}
<td class="px-4 py-3">
<Skeleton
class={getRowVariant(rowIndex)}
width={getCellWidth(colIndex)}
/>
</td>
{/each}
</tr>
{/each}
</tbody>
</table>
</div>