Generative AI has become an important part of the artistic workflow of game development. However, as detailed in a previous post, text-to-3D still lags behind text-to-2D in terms of practicality. This is starting to change. Today, we’ll revisit a practical workflow for 3D asset generation and take a step-by-step look at how to integrate Generative AI into your PS1-style 3D workflow.

Why PS1 style? Because it’s much more tolerant of the low fidelity of current text-to-3D models, allowing you to go from text to usable 3D assets with as little effort as possible.
Prerequisites
This tutorial assumes basic knowledge of Blender and 3D concepts such as materials and UV mapping.
Step 1: Generate the 3D model
To get started, visit our Shap-E Hugging Face Space here or below. In this Space, we’ll be using the open-source Shap-E model, a recent model from OpenAI that generates 3D models from text.
Type “Dilapidated Shack” in the prompt box and click Generate. Once you are satisfied with the model, download it to proceed to the next step.
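If you’d rather script this step than use the Space, the same model can be run locally. Below is a minimal sketch, assuming your installed version of the diffusers library exposes a ShapEPipeline and an export_to_ply helper; the prompt and output filename are just examples.

```python
# A minimal local alternative to the Space, assuming diffusers ships
# ShapEPipeline and export_to_ply in your installed version.
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_ply

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate a mesh from the text prompt and write it to disk.
output = pipe(
    "Dilapidated Shack",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,
    output_type="mesh",
)
export_to_ply(output.images[0], "dilapidated_shack.ply")
```

The Space exports a glTF file, which is what the next step assumes; a PLY file can be imported into Blender just as easily.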

Step 2: Import and decimate the model
Next, open Blender (version 3.1 or later). Go to (File)->(Import)->(glTF 2.0) and import the downloaded file. You may notice that the model has far more polygons than is recommended for many practical applications, such as games.

To reduce the polygon count, select the model, go to (Modifiers), and add the (Decimate) modifier. Set the ratio to a low number (e.g. 0.02). The result probably won’t look very pretty. However, for this tutorial, we’ll accept the lower fidelity.
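If you prefer scripting, the same import and decimation can be done from Blender’s Python console with bpy; the file path and the 0.02 ratio below are example values.

```python
# Import the downloaded model and decimate it via bpy (example path and ratio).
import bpy

bpy.ops.import_scene.gltf(filepath="/path/to/dilapidated_shack.glb")
obj = bpy.context.selected_objects[0]

# Add a Decimate modifier to reduce the polygon count.
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.02

# Apply the modifier so the reduction is baked into the mesh.
bpy.context.view_layer.objects.active = obj
bpy.ops.object.modifier_apply(modifier=mod.name)
```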
Step 3: Install Dream Textures
To add textures to your model, use Dream Textures, a Stable Diffusion texture generator for Blender. Download and install the add-on following the instructions from the official repository.

Once installed and enabled, open the add-on’s preferences. Find and download the texture-diffusion model.
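If you’re scripting your Blender setup, the add-on can also be installed and enabled from the Python console. This is only a sketch: the zip path and the dream_textures module name are assumptions, so check them against the release you downloaded.

```python
# Optional: install and enable the add-on from Blender's Python console.
# The zip path and the "dream_textures" module name are assumptions; verify
# them against the release you downloaded from the official repository.
import bpy

bpy.ops.preferences.addon_install(filepath="/path/to/dream_textures.zip")
bpy.ops.preferences.addon_enable(module="dream_textures")
bpy.ops.wm.save_userpref()  # keep the add-on enabled across restarts
```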
Step 4: Generate the texture
Let’s generate a custom texture. Open the UV editor in Blender and press “N” to open the properties panel. Click the Dream tab and select the texture-diffusion model. Set the prompt to “Texture” and Seamless to “Both.” This makes the generated image a seamless, tileable texture.
Under “Subject” enter the texture you want, such as “Wooden Wall,” and click “Generate.” Once you are satisfied with the results, give it a name and save it.
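Dream Textures handles all of this inside Blender, but if you’d rather generate the texture outside Blender, a roughly equivalent trick is to run Stable Diffusion with circular padding so the output tiles seamlessly. The sketch below is not Dream Textures’ API; it assumes the diffusers StableDiffusionPipeline and the runwayml/stable-diffusion-v1-5 checkpoint are available.

```python
# Not Dream Textures itself: a stand-alone way to generate a tileable texture,
# assuming diffusers and the runwayml/stable-diffusion-v1-5 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Switch every convolution to circular padding so opposite edges of the
# image line up, which makes the result tile seamlessly in both axes.
for model in (pipe.unet, pipe.vae):
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            module.padding_mode = "circular"

image = pipe("wooden wall, weathered planks, texture", num_inference_steps=30).images[0]
image.save("wooden_wall.png")  # load this file in Blender in the next step
```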

To apply the texture, select your model and go to “Materials”. Add a new material, click the dot next to “Base Color”, and select “Image Texture”. Finally, select the newly generated texture.
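The same material setup can be scripted with bpy; the material name and the wooden_wall.png path below are examples.

```python
# Create a material whose Base Color comes from the generated texture (bpy).
import bpy

obj = bpy.context.active_object

mat = bpy.data.materials.new(name="ShackMaterial")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Add an Image Texture node and connect it to the Principled BSDF's Base Color.
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/path/to/wooden_wall.png")
bsdf = nodes["Principled BSDF"]
links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

# Assign the material to the model.
obj.data.materials.clear()
obj.data.materials.append(mat)
```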
Step 5: UV mapping
Time for UV mapping, which wraps our 2D texture around the 3D model. Select the model and press “Tab” to enter Edit Mode. Then press “U” and select “Smart UV Project” to unwrap the model.
To preview the textured model, switch to rendered view (hold “Z” and select “Rendered”). You can scale up the UV map to tile the seamless texture across the model. We’re going for a retro PS1 style, so don’t make it too fancy.
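Here is the same unwrap and UV scaling as a bpy sketch; the 4.0 scale factor is just an example to make the texture repeat more densely.

```python
# Smart UV Project and UV scaling via bpy (the 4.0 factor is an example).
import bpy

obj = bpy.context.active_object

# Unwrap the whole mesh with Smart UV Project.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')

# Scale the UVs up so the seamless texture tiles across the model.
uv_layer = obj.data.uv_layers.active
for loop_uv in uv_layer.data:
    loop_uv.uv *= 4.0
```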

Step 6: Export the model
Once you are satisfied with your model, export it. Go to (File)->(Export)->(FBX), and voilà! You now have a usable 3D asset.
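The export can also be scripted; the output path below is an example, and embedding textures is optional but makes the import into a game engine simpler.

```python
# Export the selected model to FBX via bpy (example output path).
import bpy

bpy.ops.export_scene.fbx(
    filepath="/path/to/dilapidated_shack.fbx",
    use_selection=True,   # export only the selected object(s)
    path_mode='COPY',     # copy texture files alongside the FBX
    embed_textures=True,  # pack the textures into the FBX itself
)
```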
Step 7: Import into Unity
Finally, let’s see the model in action. Import it into Unity or your game engine of choice. To recreate the nostalgic PS1 aesthetic, we used custom vertex-lit shading, no shadows, lots of fog, and glitchy post-processing. There’s plenty more to learn about how to recreate the PS1 aesthetic.
The result is a lo-fi, textured 3D model in a virtual environment.

Conclusion
That’s a summary of how to create working 3D assets using Generative AI workflows. Although the results are low-fidelity, the potential is huge and with enough effort, it’s possible to use this method to generate infinite worlds in a lo-fi style. And as these models improve, it may be possible to transfer these techniques to high-fidelity or realistic styles.
If you’ve followed along and created your own 3D assets, we’d love to see them! If you want to share them, have questions, or want to join the community, join the Hugging Face Discord.

