Our system combines a robotic platform (we call the first one Maurice) with an AI agent that understands the environment, plans actions, and executes them using skills you've taught it or programmed with our SDK.
If you've built AI agents powered by LLMs before, Claude Computer Use in particular, this is how we intend the experience of building on our platform to feel, except the agent acts on the real world!
You can see Maurice serving a glass here (https://bit.ly/innate-hn-vid-serving). Here is another example (https://bit.ly/innate-hn-vid-officer) in which it was given a digital ability (through the SDK) to send a notification to your phone when it sees someone in the house. In both cases, the only work you have to do is spend 30 minutes per physical skill collecting data to train the arm, and a couple of minutes writing a system prompt.
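To give a flavor of what a digital ability looks like, here is a minimal sketch. The SDK surface isn't public yet, so the innate module, the skill decorator, and the notification helper below are illustrative names, not confirmed API:

    # Hypothetical sketch of a digital ability; the real Innate SDK may differ.
    # "innate.skill" and "innate.notifications.push" are illustrative names.
    import innate

    @innate.skill(
        name="notify_phone",
        description="Send a push notification to the owner's phone. "
                    "Use this when you see a person in the house.",
    )
    def notify_phone(message: str) -> str:
        innate.notifications.push(message)  # illustrative helper
        return f"notification sent: {message}"

The agent sees the name and description at planning time and decides when to call the skill, much like tool use in a conventional LLM agent.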
You can read more about how it works and the paradigm we're creating, and find our Discord, in our documentation (https://docs.innate.bot). We'll be open-sourcing parts of the system there soon.
We want to lower the barrier to entry to robotics. Programming robots is usually complicated, time-consuming, and limited to experts, even with AI helping you write code. We think it should be easier.
We come from an AI-for-robotics and HCI background as researchers at Stanford, and we've worked on multiple hardware + agentic AI projects this past year, but this one was clearly the most surprising.
The first time we put GPT-4 in a body, after a couple of tweaks, we were surprised at how well it worked. The robot started moving around and figuring out when to use its tiny gripper, and we had only written 40 lines of Python on a tiny RC car with an arm. We decided to combine that with recent advances in robot imitation learning, such as ALOHA, to make the arm quickly teachable for any task.
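That prototype was essentially a perceive-decide-act loop. Below is a rough reconstruction of the idea, not the original code: capture_frame, drive, and grip stand in for whatever your hardware exposes, and the prompt and JSON action format are ours to pick:

    # Minimal "LLM in a body" loop, in the spirit of that first prototype.
    # capture_frame, drive, and grip are stand-ins for the robot's own
    # camera and motor interfaces.
    import base64
    import json
    from openai import OpenAI

    client = OpenAI()

    def step(capture_frame, drive, grip):
        image_b64 = base64.b64encode(capture_frame()).decode()
        response = client.chat.completions.create(
            model="gpt-4o",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content":
                    'You control a small RC car with a gripper. Reply with '
                    'JSON: {"left": -1..1, "right": -1..1, '
                    '"gripper": "open" or "close"}'},
                {"role": "user", "content": [
                    {"type": "text", "text": "What should you do next?"},
                    {"type": "image_url", "image_url":
                        {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ]},
            ],
        )
        action = json.loads(response.choices[0].message.content)
        drive(action["left"], action["right"])  # wheel velocities
        grip(action["gripper"])                 # open or close the gripper

Run step() in a loop and the model closes the perception-action cycle on its own.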
We think it should be simple to teach robots to do tasks for us. AI agents offer a completely new paradigm for this: easy enough for many non-roboticists to get started in the field, yet extensible enough to let a robot take on very complex tasks.
The part that excites us most is that every time a builder teaches their robot a task, every other robot learns faster and better. We believe that by spreading our platforms as widely as possible, we can crowdsource massive, diverse datasets for robotics foundation models that everyone contributes to.
Under the hood, the brain (running in the cloud) uses nine different models: a YOLO, a SAM, and seven VLMs from OpenAI, Google, Anthropic, and, most importantly, a couple of Llamas running on Groq to make the system think and act faster. Each model has its own responsibility; together, they act as if they were a single model with the ability to navigate, talk, memorize, and activate skills. As a bonus, since these models keep getting better and smaller, every new release makes our robots smarter and faster!
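We haven't published the exact wiring, but conceptually it's one orchestrator routing each responsibility to a specialist and presenting the result as a single brain. A simplified, illustrative sketch (not our production code):

    # Illustrative orchestration: one specialist model per responsibility,
    # exposed to the rest of the system as a single "brain".
    from dataclasses import dataclass

    @dataclass
    class Brain:
        detector: object   # YOLO: fast object detection
        segmenter: object  # SAM: precise masks for targets
        planner: object    # large VLM: picks the next high-level action
        talker: object     # fast Llama on Groq: low-latency replies

        def act(self, frame, transcript):
            boxes = self.detector(frame)                   # what is in view
            masks = self.segmenter(frame, boxes)           # where exactly it is
            plan = self.planner(frame, boxes, transcript)  # e.g. pick_up_glass
            reply = self.talker(transcript, plan)          # narrate or answer fast
            return plan, masks, reply

Since each slot is just a callable, swapping in a newer or smaller model doesn't change the rest of the system.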
Our first robot, Maurice, is 25 cm tall, has a 5-DoF arm and a Jetson Orin Nano onboard, and comes with our software installed and a mobile app to control it. Our first batch of users wants to teach it to clean floors, tidy up after kids, wake them up in the morning, play with them, or be a professional assistant connected to email and socials. You can go wild quickly!
We're making a small batch available for Hacker News at $2,000 each for early builders who want to experiment, with $50 of free agent usage per month for a year. You can reserve one on our website with a (refundable) deposit if you're in the US. These units will start shipping in March; the first ten, shipping in February, are already booked.
We'd love your thoughts, experiences, and critiques. If you have ideas for what you'd use a home robot for, or feedback on how to make these systems more accessible, we'll be hanging around in the comments for the next few hours. Let us know what you think!
- Congrats on the launch.
Have you thought about assistive technology/accessibility tasks as well? Would love to use such a device to control the touch screens on inaccessible coffee machines at my clients' offices, for example, which I can't operate without sight. I'm sure there are many more examples of such things.
Throwing complex robots at inaccessible devices is not the proper solution, but it is by far the quickest and most practical one. I'm not in the US, so I can't even buy one, and I'm also hesitant to buy something that would be totally bricked if the company/cloud goes under.
- Love this. Thoroughly interested in jumping on this wagon.
Just throwing an idea out there: instead of re-inventing the proverbial wheeled base, I wonder if it would make sense to build on top of an established wheeled base, like the TurtleBot (https://www.turtlebot.com/), and then build an array of manipulator/grasper options on top.
It'll get you an established supply chain (including replacement parts) and let you focus on your specialty here: the reconfigurable agentic behavior stack.
It would be a no-brainer for me to jump on this and start building if the base were a TurtleBot!
- Is this stable and fast enough that I could hand it a camera and train it to point that camera at myself as I move around playing a musical instrument to create a music video?
- This is very cool! I've been playing around in the same space with a simple tracked robot and a 2-DoF gripper. You seem to be quite a bit ahead of me in functionality.
I'm using PaliGemma2 and MobileSAM for the vision part and Gemma for the thinking part. I'm hoping to stick with weights-available models as it's just a toy project.
For what it's worth, this contraption cost under £200, though I'm using a desktop with a 3090 as the brains.
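For anyone curious, loading that stack is roughly the sketch below; the checkpoint IDs are just the public Hugging Face ones I'd reach for, and the glue is a sketch, not my exact code:

    # Rough sketch of the weights-available stack described above.
    import torch
    from transformers import (AutoModelForCausalLM, AutoProcessor,
                              AutoTokenizer, PaliGemmaForConditionalGeneration)

    device = "cuda"  # the 3090

    # Vision: PaliGemma2 describes the scene / localizes objects.
    vlm_processor = AutoProcessor.from_pretrained("google/paligemma2-3b-pt-224")
    vlm = PaliGemmaForConditionalGeneration.from_pretrained(
        "google/paligemma2-3b-pt-224", torch_dtype=torch.bfloat16).to(device)

    # Thinking: Gemma turns observations into motion/gripper commands.
    llm_tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
    llm = AutoModelForCausalLM.from_pretrained(
        "google/gemma-2-2b-it", torch_dtype=torch.bfloat16).to(device)

    # MobileSAM supplies masks via its SamPredictor; omitted here for brevity.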
- Very cool! Both demos were very entertaining and charming. Could you share other behaviors/tasks that you foresee Maurice being able to tackle? I personally have trouble brainstorming tasks that I would find useful.
- Hi! A couple of questions, please: 1. Will you ship to the UK? 2. How far into the tech stack will we be able to see and tweak? I’d love to use this to really understand how this type of tech works. Thanks!
- TaskType: Action with arm: pick_up_glass
Is that preprogrammed, or is the LLM doing that?
- Congratulations on the launch.
This is pretty cool. I really like the simplicity.
While I was doing my PhD in HRI (~7 years ago), I played around with robots (mostly NAOs) to navigate and manipulate the real world. It was cool but really cumbersome.
I wish you all the best. Great UX is the key.
- Looks really cool. The preorder Stripe page says $300, but calls it a deposit. What's the actual price when available?
- Congrats on the launch! Looks fantastic.
I've been thinking about exactly this kind of architecture (vision-language models + physical robot ==> performing tasks)
I'd love to tinker with one (or more) of these 'Bots.
Question: is all the inference in the cloud, or does some run on the bot's hardware?
- Congrats on the launch.
How much did you learn from the lessons of other contemporary robotics frameworks that are out there? Do you envision focusing in on particular types of tasks later, or is it still uncertain how your robot design will evolve as the dataset grows?
- Really cool!
Had a couple questions:
- how far does the $50/mo get you?
- what's the battery life like/going to be like?
- do y'all allow, or plan to allow, people to swap out (some of) the models or orchestrate them yourself, at least incrementally? Say I want to fine-tune my own Maurice personality.
- Very cool! Are you planning on building out (or offering direction on) 'base machines'? Would love to see plug-and-play hardware infrastructure, where plopping in a new brain/etc. for development purposes is a thing.
- If things go south, the world must rely on Will Smith to save us from the machines. He's the only one with the appropriate training.
- I'm happy about your launch and wish you a good deal of luck.
Personally I am really disappointed with the idea of requiring a subscription for a home robot, though. When I was younger I envisioned a home robot to have a brain inside itself, or at least in the home. Alas, it seems the world is developing in another way.
- The site needs at least an email address for order feedback - you've got customers now :)
- 'F24'? Was this a typo? I've only ever seen W for winter and S for summer :)
- Why not show how much the video is sped up in the demo on your landing page?
- Please create a robot arm that I could instruct to iron and fold my clothes.
- Small nit, but why does the bot stop so far from you? Maybe it's the camera angle, but it looks like it wouldn't get within 10 ft of either of you.
Maybe for safety?
- Can I bring my own hardware?
- What happens if one tries to teach the robot to do criminal acts?
- Interesting
- sick
- I'm actually shocked YC invested in this product. Who's buying a robot that they then need to sit down and program - only to do things like ... what? Water a plant? (with the caveat that the glass must be pre-filled with water and sitting in a spot for the robot to grab) I cannot think of this being able to do anything remotely useful and having the juice be worth the squeeze. Hate to be such a grump about it but really what are the real-world use cases? The website does absolutely nothing to show me why I might want or need this product. It's a scratch in search of an itch.