To post my work online as easily as possible, I wanted to create some Houdini nodes that would let me render directly to Twitter and Instagram.
I’ve created two nodes that live inside the /out ROP context, so I can daisy-chain them behind a Redshift or Mantra node. When my renders finish, the nodes automatically pick up the rendered images and upload them to the cloud for me.
Apart from their basic functionality, the nodes keep track of #hashtags for me, format the text specifically for Twitter or Instagram, and encode all emojis according to their
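A minimal sketch of what that per-service formatting could look like. The function name, the character limits (280 for Twitter, 2200 for Instagram), and the trimming strategy are my assumptions, not the author's actual code:

```python
# Hypothetical caption formatter; limits and names are assumptions.
LIMITS = {"twitter": 280, "instagram": 2200}

def format_caption(text, hashtags, service):
    """Append tracked hashtags and trim the caption to the service limit."""
    tags = " ".join("#" + t.lstrip("#") for t in hashtags)
    caption = (text + " " + tags).strip() if tags else text.strip()
    limit = LIMITS[service]
    if len(caption) > limit:
        # Trim the body text but keep the hashtags intact.
        keep = limit - len(tags) - 2  # room for the ellipsis and a space
        caption = text[:keep].rstrip() + "\u2026 " + tags
    return caption
```

Keeping the hashtags and trimming only the body is one reasonable design choice: the tags are what make the post discoverable, so they should survive truncation.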
Finally, the nodes automatically prevent me from making silly mistakes, like rendering to the OpenEXR file type, which many web services still don't support, or setting an incorrect gamma for web images.
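Those pre-flight checks could be sketched like this. The accepted extensions and the gamma value of 2.2 are my assumptions about what "web-friendly" means here, not the author's exact rules:

```python
import os

# Formats most web services accept without conversion (assumption).
WEB_SAFE_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def validate_output(path, gamma):
    """Raise before rendering if the output would not be web-friendly."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in WEB_SAFE_EXTENSIONS:
        raise ValueError(
            "%s is not widely supported by web services; use png or jpg" % ext)
    if abs(gamma - 2.2) > 1e-6:
        raise ValueError("web images expect gamma 2.2, got %s" % gamma)
```

Running this check before the render starts means a bad file type fails in seconds rather than after an hour of rendering.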
Houdini and Web APIs
For anyone who would like to build the same thing: I’m using the pycloudinary package to communicate with Cloudinary’s API and upload my renders to their cloud. From there I use the buffer-python package to queue the posts up and publish them online.
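The rough shape of that pipeline might look like the following. The `buffpy` client setup varies by version and the credentials are placeholders, so treat the networked part as a sketch; `build_post` is a hypothetical pure helper I've added for illustration:

```python
# Sketch of the upload step, assuming pycloudinary and buffer-python
# (buffpy). Imports are local so the helpers stay importable without
# either package installed.

def build_post(image_url, caption):
    """Assemble a hypothetical update body: caption plus image link."""
    return {"text": caption, "media": {"photo": image_url}}

def upload_and_queue(image_path, caption):
    import cloudinary.uploader
    from buffpy.api import API

    # Push the finished render to Cloudinary's cloud storage.
    result = cloudinary.uploader.upload(image_path)
    # Placeholder credentials; queueing against a Buffer profile is
    # left schematic here, as the exact calls depend on the buffpy version.
    api = API(client_id="...", client_secret="...", access_token="...")
    return build_post(result["secure_url"], caption)
```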
Redshift ROP Chains
I learned the hard way that, by default, Redshift does not render ROP chains in strict sequence: downstream nodes receive the renderFinish cue too early. In my case, my nodes would try to upload the render before it was finished, which either failed outright or, worse, uploaded a previously rendered image that was still on disk.
To solve this, just make sure your code unchecks the Non-blocking current frame rendering checkbox on the Redshift ROP.
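Unchecking it from code could look like this. The parameter's internal name (`RS_nonBlockingRendering`) is my guess; verify it against your Redshift ROP's parameter list before relying on it:

```python
# Hypothetical helper to force blocking renders on a Redshift ROP.
# The internal parm name is an assumption; check it in your build.

def force_blocking_render(rop, parm_name="RS_nonBlockingRendering"):
    """Uncheck non-blocking rendering on a Redshift-ROP-like node.

    Returns True if the checkbox was on and has been turned off."""
    parm = rop.parm(parm_name)
    if parm is not None and parm.eval():
        parm.set(0)  # 0 == checkbox off, so the ROP blocks until the frame is done
        return True
    return False
```

With the checkbox off, the ROP blocks until the frame is truly finished, so any downstream node only fires once the image is actually on disk.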
Render and Forget
In my previous post I outlined how I created assets to speed up my concept-to-render workflow. Adding these new nodes into the mix has taken the friction out of the final part of the process.
Now I can set up final renders and forget about them, walk out of the house, and everything else will be taken care of.