Case studies

KAS scraper - automated public listing monitoring

Continuous monitoring of public registries without manual checking overhead.

Business value: Faster response to relevant listing changes and lower operational load through event-based alerts.

Outcome: a 24/7 service that detects new records and sends deduplicated notifications.

Tech stack: Python, Playwright, SQLite, systemd, HTTP scraping, Discord webhooks, CLI, daemon services

Business problem

Manual checks were slow and expensive, while changing website behavior increased the risk of missing critical records.

Approach and solution

I built a modular daemon service that crawls sources on schedule, detects changes, and sends alerts only when new business-relevant data appears.
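The change-detection step can be sketched as content fingerprinting: each scraped record is hashed into a stable key, and only records whose key has not been seen before are passed on to the alerting stage. This is a minimal illustration, not the project's actual code; the `fingerprint` and `detect_new` helpers are hypothetical names.

```python
import hashlib

def fingerprint(record: dict) -> str:
    """Stable hash over a record's fields, used as its identity key."""
    key = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(key.encode()).hexdigest()

def detect_new(records: list[dict], seen: set[str]) -> list[dict]:
    """Return only records whose fingerprint has not been seen before."""
    fresh = []
    for rec in records:
        fp = fingerprint(rec)
        if fp not in seen:
            seen.add(fp)
            fresh.append(rec)
    return fresh
```

Sorting the keys before hashing makes the fingerprint independent of field order, so the same listing always maps to the same key across crawls.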

Delivery scope

  • Time-window scheduling with a multi-frequency polling strategy.
  • Deduplication and idempotent notification pipeline with persistent state.
  • Playwright fallback for scenarios where pure HTTP scraping was not reliable.
  • systemd deployment with auto-restart, logging, and diagnostic runtime modes.
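The systemd deployment with auto-restart and journal logging might look like the unit file below. The paths, unit name, and CLI flag are hypothetical placeholders, not the project's actual values.

```ini
# Hypothetical unit file, e.g. /etc/systemd/system/kas-scraper.service
[Unit]
Description=KAS scraper daemon
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/opt/kas-scraper/venv/bin/python -m kas_scraper --daemon
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```

`Restart=on-failure` gives the auto-restart behavior, and routing stdout/stderr to the journal makes `journalctl -u kas-scraper` the single place to inspect logs and diagnostic output.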
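Time-window scheduling with multiple frequencies can be sketched as a lookup table mapping time-of-day windows to polling intervals, so the scraper polls frequently during active hours and backs off otherwise. The specific windows and intervals below are illustrative assumptions, not the project's real configuration.

```python
from datetime import datetime, time as dtime

# Hypothetical polling windows: (start, end, interval in seconds).
WINDOWS = [
    (dtime(7, 0), dtime(18, 0), 300),    # daytime: poll every 5 minutes
    (dtime(18, 0), dtime(23, 0), 1800),  # evening: poll every 30 minutes
]
DEFAULT_INTERVAL = 3600  # overnight fallback: poll hourly

def poll_interval(now: datetime) -> int:
    """Pick the polling interval for whichever window contains *now*."""
    t = now.time()
    for start, end, interval in WINDOWS:
        if start <= t < end:
            return interval
    return DEFAULT_INTERVAL
```

The daemon's main loop then sleeps for `poll_interval(datetime.now())` seconds between crawls, re-evaluating the window on every iteration.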
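The deduplication and idempotent notification pipeline can be sketched with a small SQLite table keyed by record fingerprint: `INSERT OR IGNORE` makes retries safe, because a repeated attempt for a known record simply inserts nothing and sends nothing. This is a simplified sketch under assumed names (`init_db`, `notify_once`), not the delivered implementation.

```python
import sqlite3

def init_db(path: str) -> sqlite3.Connection:
    """Open the persistent state store and ensure the dedup table exists."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS notified ("
        "  fingerprint TEXT PRIMARY KEY,"
        "  first_seen TEXT DEFAULT CURRENT_TIMESTAMP"
        ")"
    )
    return conn

def notify_once(conn: sqlite3.Connection, fingerprint: str, send) -> bool:
    """Call *send* at most once per fingerprint; returns True if sent."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO notified (fingerprint) VALUES (?)",
        (fingerprint,),
    )
    if cur.rowcount == 1:  # first time this record is seen
        send(fingerprint)  # e.g. post to a Discord webhook
        conn.commit()      # persist only after a successful send
        return True
    return False
```

Committing after the send gives at-least-once delivery: a crash between send and commit may repeat one notification on restart, but a record is never silently dropped.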

Business impact

  • Relevant updates reach stakeholders in near real time.
  • No duplicate alerts and reduced manual operational effort.
  • Stable monitoring foundation ready for additional data sources.