What it does now
Text → image and reference image → image (image-to-image).
Preset aspect ratios (1:1, 3:4, 9:16) so you don't have to crop afterward; a possible mapping to output sizes is sketched after this list.
No account required for a quick trial (daily rate limit to prevent abuse).
A minimal, guided UI: only the knobs most people need to ship a result.
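For illustration only, here's one way the preset ratios could map to concrete output sizes. The post doesn't state the actual pixel dimensions, so the numbers below are placeholders:

```ts
// Hypothetical preset table: aspect ratio -> output size in pixels.
// The real dimensions used by nano2image.com are not stated in the post.
const ASPECT_PRESETS = {
  "1:1":  { width: 1024, height: 1024 },
  "3:4":  { width: 768,  height: 1024 },
  "9:16": { width: 720,  height: 1280 },
} as const;

type AspectRatio = keyof typeof ASPECT_PRESETS; // "1:1" | "3:4" | "9:16"
```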
How it’s built / trade-offs
Next.js on Cloudflare Pages (fast cold starts, global CDN).
Supabase for simple per-day quotas and lightweight logs.
Reference images are passed as https URLs, which keeps the front end lean.
Basic file checks only; no batch queue yet, to keep the infrastructure simple. A rough sketch of the quota and reference checks is below.
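To make that concrete, here's a minimal server-side sketch of the per-day quota check and the reference-URL validation, assuming supabase-js v2. The table and column names (generation_log, ip_hash, created_at), the daily limit, and the 10 MB cap are assumptions for illustration, not the real schema.

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!
);

const DAILY_LIMIT = 10; // assumed free-trial limit, not the real number

// Returns true if this (hashed) visitor is still under today's quota.
export async function underDailyQuota(ipHash: string): Promise<boolean> {
  const startOfDay = new Date();
  startOfDay.setUTCHours(0, 0, 0, 0);

  // head: true fetches only the row count, not the rows themselves.
  const { count, error } = await supabase
    .from("generation_log")
    .select("*", { count: "exact", head: true })
    .eq("ip_hash", ipHash)
    .gte("created_at", startOfDay.toISOString());

  if (error) throw error;
  return (count ?? 0) < DAILY_LIMIT;
}

// Cheap checks on the reference URL before handing it to the model.
export async function validateReferenceUrl(refUrl: string): Promise<void> {
  const url = new URL(refUrl); // throws on malformed input
  if (url.protocol !== "https:") throw new Error("reference must be an https URL");

  // HEAD pre-flight: verify content type and size without downloading the image.
  const head = await fetch(refUrl, { method: "HEAD" });
  const type = head.headers.get("content-type") ?? "";
  const size = Number(head.headers.get("content-length") ?? "0");
  if (!type.startsWith("image/")) throw new Error("reference is not an image");
  if (size > 10 * 1024 * 1024) throw new Error("reference larger than 10 MB");
}
```

Counting with head: true keeps the quota check to a single cheap query, and the HEAD pre-flight rejects bad references without downloading them.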
What’s intentionally different
Optimized for reference-driven results and speed-to-first-image.
Fewer choices by design—aiming for “good enough, quickly,” not a studio console.
Known limits / near roadmap
Character/identity consistency over long series isn’t robust yet.
No workspace/history or batch generation (planned).
Upcoming: batch mode, visual style/lighting presets, and shareable recipe links.
I’d love feedback on
Reliable input rules for reference images (size/crop/compression).
The minimum set of parameters you still want for practical outputs.
Whether recipe export or batch mode should land first.
Demo: https://nano2image.com/
I’ll be in the comments to answer questions and ship fixes quickly.