Introducing Bezi AI

Julian Park

March 28, 2024

Today marks a significant milestone in the world of 3D design: the ability to ideate at the speed of thought with an infinite asset library.

Assets are critical to the design workflow. No chef can cook a delicious meal without good ingredients! After many conversations with designers working in the 3D software industry, and with designers who want to but aren't, it's clear there's a shared problem: getting started takes too long. Even when you know where to start, creating the assets often takes a long time, learning modeling tools through countless tutorials and meticulously sculpting each vertex.

This is in stark contrast to the current 2D design workflow, which enjoys an abundance of images that can easily be dragged into a design tool. A quick Google image search can get you pretty far with ingredients for the ideation phase. If only there were a way for 3D designers to operate with this level of creative freedom…

Bezi AI is the fastest way to turn ideas into reality. From a small idea in your mind to a fully interactive 3D experience… in seconds.

Speed

Bezi AI offers a way to storyboard at the speed of grayboxing but with the quality of a prototype.

Consider 3D models on a spectrum of visual fidelity: the simplest gray box, then the low-poly object with basic color, then the final production-ready asset. Starting with simple boxes is one thing, but to extend beyond that, most 3D designers are faced with two options: either model it or buy it.

As mentioned, modeling each asset usually takes a long time, even if you know how to sculpt and texture in 3D. Buying an asset might be faster, but it comes with its own tradeoffs. For one, it's unlikely you'll find the exact asset you need in a consistent, desired aesthetic style. Assets on 3D marketplaces also tend to have heavy geometry and textures that make storage management unwieldy, and downloading them into a version control system that must be kept in sync with the game engine only adds to the overhead.

With Bezi AI, you simply type the asset you want and drag it into the scene. That’s it. Just a text prompt describing an object, e.g. “medieval chest of gold coins”, and in a matter of seconds, you have several options of medieval chests to choose from. At that point, you can directly drag it into the scene, replace an existing scene object with this generated asset, or further upscale the asset to improve its quality.

You'll occasionally notice that certain generated assets don't look the way you expected. This reflects the nature of today's foundation models (Bezi AI is currently powered by Luma), and it also aligns with Bezi AI's focus on an efficient design process rather than on generating production-ready assets.

Composition

When using Bezi AI, it’s important to consider the appropriate level of detail for your scene. This allows you to control the hierarchy of the entire experience, which is important when interacting with individual parts of the generated assets.

For example, let’s say you want to use a panda character for a game, which requires the panda to hold different objects at different times. One way to do that would be to generate a “panda holding flowers” and afterwards generate a “panda holding a shovel”.

While that could work, it becomes difficult to add interactive states to each item that the panda holds since the flowers and the shovel are attached to the panda. Instead, generating the panda first and independently generating the other objects leads to more useful interactivity.
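The separation argument can be pictured as a small scene hierarchy. The sketch below is a toy Python illustration of the concept only, not Bezi's actual data model or API: the held item lives as an independent child node, so it can be swapped without regenerating the panda.

```python
# Toy scene-graph sketch (illustrative only; not Bezi's API).
# Each node is an independently generated asset; held items are
# attached as swappable children of the character.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

    def attach(self, child):
        self.children.append(child)

    def swap_child(self, old_name, new_node):
        # Replace one held item without touching the parent asset.
        self.children = [new_node if c.name == old_name else c
                         for c in self.children]

panda = Node("panda")            # generated once, on its own
panda.attach(Node("flowers"))    # independently generated item

# Later, swap the flowers for a shovel; the panda is untouched.
panda.swap_child("flowers", Node("shovel"))
print([c.name for c in panda.children])  # ['shovel']
```

Had the flowers been baked into a "panda holding flowers" generation, this swap would require regenerating the whole character.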

In addition, a scene composed of discrete objects tends to look better overall, since the generative model allocates a similar level of detail to each object. An independently generated shovel will have higher visual fidelity than a shovel attached to a panda.

Another benefit of atomizing assets is the consistency of objects across scenes. This has been a commonly discussed issue in the generative AI community given the difficulty of consistently preserving entities between scenes. With Bezi AI, designers can reuse the same asset for different purposes where applicable.

After generating each asset, you can simply move the objects to their intended position, rotation, and scale in the Bezi editor. By combining individual elements into a larger concept, you can control each object as you wish, animating them separately and swapping them with other assets as needed.

You can also extrapolate this concept of composition to a larger 3D environment. Instead of generating a “futuristic city with skyscrapers”, it might be more useful to generate a number of skyscrapers to populate the city and add interactive states to specific buildings.
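The same idea at environment scale can be sketched as a loop that places discrete building assets, each with its own transform and interactivity flag. This is a hypothetical illustration of the composition pattern, not how Bezi generates or stores scenes:

```python
# Toy illustration: populate a city from discrete building assets
# rather than one monolithic "futuristic city" generation.
import random

random.seed(0)  # deterministic placement for this sketch

city = []
for i in range(5):
    building = {
        "name": f"skyscraper_{i}",
        # Simple grid layout with a little jitter on one axis.
        "position": (i * 20.0, 0.0, random.uniform(-5.0, 5.0)),
        # Only specific buildings need interactive states.
        "interactive": i == 2,
    }
    city.append(building)

print([b["name"] for b in city if b["interactive"]])  # ['skyscraper_2']
```

Because each skyscraper is its own object, any one of them can be swapped, animated, or given interactive states without touching the rest of the city.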

Source of Truth

A whiteboard wouldn't be as useful if it weren't a shared source of truth between people. The content on the board is clearly important, but what's more important is that everyone in the room is looking at the same thing.

Similarly, the 3D ideation phase becomes much more valuable when it’s a shared source of truth with your entire team, not just the technical 3D experts. The value of a design file is amplified when it’s collaborative with the rest of the team.

Upon generating and composing scenes, designers can easily invite their teammates to the same Bezi file with a web link. This enables the whole team to contribute to the experience, add comments, and iterate together. Say goodbye to the days of version control issues!


In conclusion, Bezi AI is all about making 3D design easier and faster than ever before. It's starting with a text-to-3D feature today, but will expand to other intelligent features over time to boost your productivity. Bezi is committed to freeing designers from the technical challenges of the 3D software industry, allowing them to focus on what humanity has been doing best for millennia: storytelling.
