It’s always been a dream of mine to create a full equirectangular 360 panorama with AI art generators. I’ve only really managed it once in the past, by taking one of my own 360 drone panoramas and applying a style transfer to it. This turned out OK, but with the GPU I had at the time I was limited to a very low resolution, so I stopped chasing that method.
Ever since, I have tried almost every new AI art generator I could find to see if someone had cracked the code for 360 panorama generation. So far, no luck. I’ve also had a few lengthy discussions with the Disco Diffusion (DD) team about ways to potentially render this out, but nothing has come of it, and it’s not really a high priority for developers who are dedicating their free time to updating and maintaining DD, which I completely understand. There may also be some very big technical hurdles to overcome to accomplish this.
So I tried what some others have done: using a guided 360 equirectangular image as the init image, along with some keyword magic, to create something that would mimic a 360 panorama.
The result looks like crap: there’s no proper edge awareness on the left and right sides, and the north and south poles just aren’t stitched correctly. Basically, this was a fail.
So I put it aside and went back to working on trippy psychedelic animations using DD with Turbo, until I saw a user on Discord post a 360 DD pano that blew my mind. That user was Osmodin, so I reached out and asked him how it was done (btw, all the credit goes to him).
After a bit of time and discussion about how he did it, the light bulb went off in my head and I said, of course this should work. But let me back up and say I’m a professional photographer who specializes in 360 panoramas from a drone or the ground, so I had already been using the tools he mentioned for years to create panoramas.
I recently wrapped up shooting, editing, and building a virtual tour for Fly Ranch, which is owned by the Burning Man organization. You can see the full version here (best viewed on a desktop).
So how were these created? Well, that’s the million-dollar question I, and many others, have been asking, and it’s not a simple one to answer.
The idea is to use a 3D app such as Blender to generate each frame of a pano at the various angles and rotations, or to use existing images shot for a 360 panorama before it’s stitched.
Here is an example of a camera rig I built in Blender, with a camera at each defined angle and enough overlap (70%) between images.
Tools you will need
First, use Blender or any other 3D application to create a custom camera rig that mimics the process of taking 360 photos from a drone or DSLR. You take an image every x degrees at the center, up around 135 degrees, down at 45 degrees, and one straight up and one straight down. Do not try to use a fisheye lens to get away with fewer images; the stitching won’t work. In Blender I used a 36mm sensor size with a 28mm focal length.
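If you want to sanity-check your own rig before rendering, the shot count per ring follows from the lens geometry. This is just my own back-of-the-envelope math, not part of the original workflow; `shots_per_ring` is a name I made up:

```python
import math

def shots_per_ring(sensor_mm: float, focal_mm: float, overlap: float) -> int:
    """Shots needed for a full 360-degree ring at a given overlap fraction."""
    # Horizontal field of view of the virtual camera (pinhole model).
    hfov = math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))
    # Each new shot only advances by the non-overlapping part of the frame.
    step = hfov * (1 - overlap)
    return math.ceil(360 / step)

# 36mm sensor, 28mm focal length, 70% overlap (the values above)
print(shots_per_ring(36, 28, 0.70))  # 19 shots per ring
```

A 36mm sensor behind a 28mm lens gives roughly a 65-degree horizontal field of view, so at 70% overlap each shot only covers about 20 degrees of new ground.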
Once you have your custom camera rig, create a custom shader on a sphere with some sort of fractal or Perlin noise. This gives DD something to work from when it interprets your prompt.
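If you’d rather generate the noise outside Blender, here is a rough stand-in for that noise shader: a stdlib-only sketch that writes grayscale value noise (a simpler cousin of Perlin noise) as a PGM image you could texture onto the sphere. The function names, resolution, and cell size are all my own choices:

```python
import random

def value_noise_image(width, height, cell=32, seed=42):
    """Grayscale value noise: a random lattice, bilinearly interpolated."""
    rng = random.Random(seed)
    gw, gh = width // cell + 2, height // cell + 2
    grid = [[rng.random() for _ in range(gw)] for _ in range(gh)]
    pixels = []
    for y in range(height):
        gy, fy = divmod(y, cell)
        ty = fy / cell
        for x in range(width):
            gx, fx = divmod(x, cell)
            tx = fx / cell
            # Bilinear blend of the four surrounding lattice values.
            top = grid[gy][gx] * (1 - tx) + grid[gy][gx + 1] * tx
            bot = grid[gy + 1][gx] * (1 - tx) + grid[gy + 1][gx + 1] * tx
            pixels.append(int(255 * (top * (1 - ty) + bot * ty)))
    return pixels

def write_pgm(path, width, height, pixels):
    """Write a binary PGM (portable graymap) file."""
    with open(path, "wb") as f:
        f.write(f"P5 {width} {height} 255\n".encode())
        f.write(bytes(pixels))

write_pgm("noise_init.pgm", 256, 256, value_noise_image(256, 256))
```

Layering a few of these at different cell sizes would get you closer to the fractal look of Blender’s built-in noise textures.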
After you have rendered out each image, bring them all into PTGui and align them. If you have the proper amount of overlap between cameras and the correct sensor size and focal length, it shouldn’t have a problem stitching them together.
Once you have your alignment, save the position of each aligned image as a template. This is important: after you run each image through DD and bring the results back, PTGui won’t be able to align them on its own, so the template preserves the alignment.
Take each image rendered out from Blender and use it as an init image with a decent number of skip timesteps set. Ideally you want to skip around 50% of your iterations, so if you have 250 iterations, try a skip timestep setting of 125. I played around with this for a bit and found that I liked 90 for my examples. Do a few tests on a single image to see how it looks; you don’t want it to stray too far from the init or the stitching will not look good.
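The arithmetic above can be reduced to a one-liner if you want to batch it across different iteration counts (the function name is mine; in the DD notebook the actual setting is typically called skip steps):

```python
def skip_timesteps(iterations: int, keep_fraction: float = 0.5) -> int:
    """Diffusion steps to skip so the init image still shows through."""
    return round(iterations * keep_fraction)

print(skip_timesteps(250))        # 125 -- the 50% starting point
print(skip_timesteps(250, 0.36))  # 90  -- closer to what I ended up liking
```

The lower the fraction, the more DD repaints, and the harder the stitch becomes.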
Once all your images are processed through DD (try to keep the same aspect ratio as your source images), bring them back into PTGui, load the template, and you should have a 360 panorama you can export with the DD style applied to each image.
This is a very simple tool that injects the proper metadata into your 2:1-ratio equirectangular panorama. Once done, it can be uploaded to Meta (Facebook) and viewed as a 360 panorama.
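For the curious, the metadata in question is Google’s Photo Sphere (GPano) XMP block, which viewers use to detect an equirectangular projection. Here is a minimal sketch of what such a packet looks like; the helper name and the example pixel dimensions are mine, and actually embedding it in a JPEG is best left to a dedicated tool (exiftool can write XMP-GPano tags, for instance):

```python
def gpano_xmp(width: int, height: int) -> str:
    """Minimal GPano XMP packet for a full, uncropped equirectangular pano."""
    return f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:GPano="http://ns.google.com/photos/1.0/panorama/"
    GPano:ProjectionType="equirectangular"
    GPano:FullPanoWidthPixels="{width}"
    GPano:FullPanoHeightPixels="{height}"
    GPano:CroppedAreaImageWidthPixels="{width}"
    GPano:CroppedAreaImageHeightPixels="{height}"
    GPano:CroppedAreaLeftPixels="0"
    GPano:CroppedAreaTopPixels="0"/>
 </rdf:RDF>
</x:xmpmeta>"""

print(gpano_xmp(8192, 4096))
```

Because the pano is a full sphere, the cropped-area fields simply match the full dimensions.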
The virtual tour itself was built with custom software I use for my tours.
360 Disco Diffusion Panoramas Virtual Tour
Click on the image above to view the panoramas. This is best viewed on a big desktop monitor. Clicking the bottom-right icon displays additional panoramas, or click the link to view this in another tab.
Below is a quick video I made of the process. It won’t cover every single step and setting, but it should give you a good idea of the workflow that was used.
While PTGui isn’t free, you can use their trial version, which puts watermarks all over the panorama, as a test. The only free alternative stitching software I found is Hugin, but don’t ask me how to use it or whether it can even do the steps outlined in this article or video.
If you happen to shoot your own 360 panoramas, you can use those as init images in DD and stitch them back together, skipping the full Blender process.
Here is an example of a 360 panorama shot with my drone near Eagle Rock
This is what it looked like after processing through Disco Diffusion and stitching it back together. You can see the full 360 in the virtual tour above.
I hope this helps, and if you happen to create a 360 pano you would like showcased on this site, let me know. You can find me on Discord as Gateway, or feel free to leave a comment below or use our contact form.
If you like this content please use the share buttons below!
Also, again, a big thanks to Osmodin for bringing this solution to my attention!