How far can we push the browser as a data engine — not just for visualizations, but for curating and querying large datasets? Do we need traditional backend architectures?
I wanted to see what happens when we treat the browser as part of the data stack, using pure JavaScript to load, slice, and explore datasets interactively. That experiment led to a small set of open-source tools: Hyparquet, a JavaScript Parquet reader, and HighTable, a virtualized table component for rendering large datasets. Together they probe where the browser stops being a thin client and starts acting like a real data engine.
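To make the "slice and explore" idea concrete, here is a minimal sketch in plain JavaScript. The dataset is hypothetical (standing in for rows decoded from a Parquet file), and the function names are mine, not Hyparquet's or HighTable's API; the point is just that windowing and querying can happen entirely client-side, with no backend round trip.

```javascript
// Hypothetical in-memory dataset, standing in for rows decoded
// client-side from a columnar file such as Parquet.
const rows = Array.from({ length: 1000 }, (_, i) => ({
  id: i,
  category: i % 2 === 0 ? 'even' : 'odd',
  value: i * 10,
}))

// Slice: return only the visible window of rows, the way a
// virtualized table requests rows as the user scrolls.
function sliceRows(data, start, end) {
  return data.slice(start, end)
}

// Query: filter and aggregate in the browser itself.
function sumWhere(data, predicate) {
  return data.filter(predicate).reduce((acc, r) => acc + r.value, 0)
}

const page = sliceRows(rows, 100, 110)
console.log(page.length) // 10 rows rendered, 990 kept off-screen
console.log(sumWhere(rows, r => r.category === 'even'))
```

The interesting question is what happens when `rows` no longer fits in memory; that is where techniques like HTTP range requests and lazy column decoding come in.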
Curious what others think about the future of browser-first data tools:
- Where do you see the practical limits of client-side data processing?
- What would make browser-based architectures a viable alternative to traditional data stacks?