Render to Twitter and Instagram from Houdini
March 12, 2018

To post my work online as easily as possible, I wanted to create some Houdini nodes that would let me render directly to Twitter and Instagram.

Functionality

I’ve created two nodes that live in the /out ROP context and can be daisy-chained behind a Redshift or Mantra node. When a render finishes, they automatically pick up the rendered image and upload it to the cloud for me.

Apart from their basic functionality, the nodes keep track of #hashtags for me, format the text specifically for Twitter or Instagram, and encode all emojis as their :short-codes:.
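The post doesn’t show the node code itself, so here is a minimal stdlib-only sketch of what that per-platform formatting might look like. The helper names and the tiny emoji table are my own; in practice a package like emoji (its demojize function) handles the :short-code: conversion, and the character limits assumed here are Twitter’s 280 and Instagram’s 2,200.

```python
# Hypothetical caption-formatting helpers - a sketch, not the actual node code.
# Emoji encoding is illustrated with a tiny hand-rolled mapping; the `emoji`
# package's demojize() would do this properly.

EMOJI_SHORTCODES = {
    "\U0001F525": ":fire:",  # the fire emoji
    "\U0001F389": ":tada:",  # the party-popper emoji
}

CHAR_LIMITS = {"twitter": 280, "instagram": 2200}  # assumed platform limits

def encode_shortcodes(text):
    """Replace known emojis with their :short-code: equivalents."""
    for char, code in EMOJI_SHORTCODES.items():
        text = text.replace(char, code)
    return text

def format_post(text, hashtags, platform):
    """Append hashtags, encode emojis, and truncate to the platform limit."""
    tags = " ".join("#" + t.lstrip("#") for t in hashtags)
    caption = text + " " + tags if tags else text
    return encode_shortcodes(caption)[:CHAR_LIMITS[platform]]
```

A call like `format_post("Done \U0001F525", ["houdini"], "twitter")` would then produce a tweet-ready caption with the hashtag appended and the emoji spelled out as `:fire:`.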

Finally, the nodes automatically prevent me from making silly mistakes, like rendering to the OpenEXR file type, which many web services still don’t support, or setting an incorrect gamma for web images.
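A pre-flight check like that could be as simple as the sketch below. The function name and the set of “web-safe” extensions are assumptions of mine, not taken from the post; a gamma check would additionally need to read the ROP’s gamma parameter, which isn’t shown here.

```python
import os

# Hypothetical pre-flight check run before uploading a render:
# reject file types that many web services don't handle.

WEB_SAFE_EXTENSIONS = {".png", ".jpg", ".jpeg"}  # assumed safe set

def check_render_output(path):
    """Return a list of problems that make the image unsuitable for web upload."""
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in WEB_SAFE_EXTENSIONS:
        problems.append("unsupported file type for web upload: %s" % ext)
    return problems
```

Running it against an OpenEXR render would flag the file before any upload is attempted, while a PNG passes cleanly.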

Houdini and Web APIs

To make all this magic work, I hacked around with Python for a while and tried different web APIs until I had a working solution. For now I’ve settled on a combination of Cloudinary and Buffer.

For anyone who would like to build the same thing: I’m using the pycloudinary package to talk to Cloudinary’s API and upload my renders to their cloud. From there I use the buffer-python package to queue them up and post them online.
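The Cloudinary half of that pipeline can be sketched as follows. `cloudinary.uploader.upload` is pycloudinary’s documented upload call (it assumes credentials are configured via `cloudinary.config()` or the `CLOUDINARY_URL` environment variable); the Buffer step is left as a comment because buffer-python’s exact manager interface varies between versions, so check its docs rather than trusting a guess here.

```python
# Sketch of the upload step, assuming pycloudinary credentials are already
# configured (cloudinary.config() or the CLOUDINARY_URL env var).
try:
    import cloudinary
    import cloudinary.uploader
except ImportError:  # the package is only needed where this actually runs
    cloudinary = None

def upload_render(path):
    """Upload a finished render to Cloudinary and return its public URL."""
    if cloudinary is None:
        raise RuntimeError("pycloudinary is not installed")
    result = cloudinary.uploader.upload(path)  # documented pycloudinary call
    return result["secure_url"]

# With buffer-python you would then create an update per social profile,
# passing the returned URL as the post's media - see the buffpy docs for
# the exact interface in your installed version.
```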

Redshift ROP Chains

I learned the hard way that, by default, Redshift does not render ROP chains in strict sequence: your downstream nodes will receive the render-finished cue too early.

In my case, my nodes would try to upload the render before it had finished. This resulted in either a failed upload or, worse, uploading a previously rendered image that was still on disk.

To solve this, just make sure your code unchecks the Non-blocking current frame rendering checkbox on the Redshift ROP.

Render and Forget

In my previous post I outlined how I created assets to speed up my concept to render workflow. Adding these new nodes into the mix has taken the friction out of the final part of the process.

Now I can set up final renders and forget about them, walk out of the house and everything else will be taken care of automatically.

I’m a 3D artist in London, mashing polygons with code. Now that web and CG are converging into VR, I’m moving onto that intersection.

Currently I’m available for work

Follow my journey @pixelprotest_
