Deadline 5.1 update

loocas | deadline | Saturday, December 31st, 2011

Deadline 5.1

I just updated to the final release of Deadline 5.1 after beta testing it for a few months, and I couldn’t recommend any other render manager more. If you’re still on Deadline 5 or earlier, this update is highly recommended, especially since it boasts full CPython support, tight Shotgun integration, Draft, the ability to run multiple instances on one render node and much more. For more details, read the official announcement.


Deadline’s Power Management helps you save money

loocas | deadline,miscellaneous,software | Tuesday, November 15th, 2011


Along with Gavin Greenwalt from Straightface, I’m featured in Thinkbox’s study looking into the Deadline Power Management feature and how it can help save your studio money in the end.

Go ahead, it’s an interesting read.

Scripting Deadline Event plugins, tutorial

loocas | deadline,Python,technical,videotutorials | Sunday, July 24th, 2011

People have asked me how to actually write a plugin for Deadline that will automatically submit a Nuke scene (or any other, for that matter), so I put together a quick tutorial showing you just that. :)

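For those who want the gist in text form: the usual way to submit a job from a script is to build a job info file and a plugin info file and hand both to `deadlinecommand`. Here’s a minimal sketch, assuming standard Deadline submission keys (`Plugin`, `Name`, `Frames`, `Pool`, `Priority`, `SceneFile`); the default pool, priority and file paths are purely illustrative:

```python
import os
import subprocess

def build_nuke_submission(scene_file, frames="1-100", pool="none", priority=50):
    # Job info file: tells Deadline which plugin to run and how to schedule it.
    job_info = "\n".join([
        "Plugin=Nuke",
        "Name=%s" % os.path.basename(scene_file),
        "Frames=%s" % frames,
        "Pool=%s" % pool,          # illustrative default
        "Priority=%d" % priority,  # illustrative default
    ])
    # Plugin info file: parameters for the Nuke plugin itself.
    plugin_info = "SceneFile=%s" % scene_file
    return job_info, plugin_info

def submit(scene_file, deadline_cmd="deadlinecommand"):
    # Write both files and hand them to deadlinecommand.
    job_info, plugin_info = build_nuke_submission(scene_file)
    with open("job_info.job", "w") as f:
        f.write(job_info)
    with open("plugin_info.job", "w") as f:
        f.write(plugin_info)
    subprocess.call([deadline_cmd, "job_info.job", "plugin_info.job"])
```

An event plugin would call something like this from its job-finished callback; watch the video for the full walkthrough.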

Deadline’s Power Management not waking slaves properly

loocas | deadline,miscellaneous,software,technical | Sunday, May 29th, 2011

I recently had a problem with Deadline 5 and its (awesome) Power Management setup. The issue was that the server running Pulse wasn’t waking my machines up from their shutdown states via Wake-on-LAN (WOL).

The weirdest thing was that I was able to wake those machines up from any of my workstations via the Deadline Monitor app, but Pulse wasn’t able to. So, after speaking to Thinkbox Software support (which is also top-notch and very helpful, by the way), they recommended a few network traffic sniffing apps to monitor what was going on on the NICs.

The problem was that the NIC connected to the network wasn’t actually sending the magic packet, so, obviously, no machines were able to receive it. After a bit of further investigation, I found out that the WOL packet was actually being sent through the secondary NIC on the server, which wasn’t physically connected to the switch (mainly because the server also acts as a DC). So, the simplest solution seemed to be to disable the secondary NIC in Windows and have the primary NIC take care of the whole business.
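For reference, the magic packet itself is dead simple: six 0xFF bytes followed by the target MAC address repeated 16 times, typically sent as a UDP broadcast. A minimal sketch in Python (the MAC address and the optional `source_ip`, which pins the sending NIC, are illustrative):

```python
import socket

def build_magic_packet(mac):
    # Magic packet: 6 x 0xFF, then the target MAC repeated 16 times (102 bytes).
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac, broadcast="255.255.255.255", port=9, source_ip=None):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    if source_ip is not None:
        # Binding to a specific local address forces the packet out of
        # that NIC, regardless of which interface Windows would pick.
        sock.bind((source_ip, 0))
    sock.sendto(build_magic_packet(mac), (broadcast, port))
    sock.close()
```

When the OS picks the wrong outgoing interface (as Pulse’s server did here), explicitly binding the socket to the primary NIC’s address is one way around it.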

This, however, presented a lot of trouble. Disabling the secondary adapter completely disables the NIC (in Windows, that is), which also invalidates any licenses bound to that particular adapter’s MAC address! After that I wasn’t able to start Nuke, Mari, or even the Deadline Slaves!

So, I had to dig deeper. The answer was Interface Metrics. In the Advanced tab under the IP properties, you can manually override the interface metrics. See the link for more details, but basically, a lower value means a higher priority. In my case, the secondary NIC (not physically connected to the switch) got automatically assigned a higher-priority metric (a lower value) than the primary NIC. I manually overrode those and voilà, all traffic was being directed through the primary NIC.

To check what settings you’re at, use this command in the command prompt:

netsh interface ip show address
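On Vista/Server 2008 and later, `netsh interface ipv4 show interfaces` also lists each interface’s metric in the `Met` column. If you want to check which NIC wins programmatically, here’s a small parser sketch (the sample output in the usage test below is illustrative, not captured from my server):

```python
def rank_by_metric(netsh_output):
    """Parse 'netsh interface ipv4 show interfaces' output and return
    interface names sorted by metric (lowest metric = highest priority)."""
    rows = []
    for line in netsh_output.splitlines():
        parts = line.split(None, 4)
        # Data rows start with a numeric interface index (the Idx column);
        # the header and separator lines don't, so they're skipped.
        if len(parts) == 5 and parts[0].isdigit():
            idx, metric, mtu, state, name = parts
            rows.append((int(metric), name))
    rows.sort()
    return [name for metric, name in rows]
```

The first name returned is the interface Windows will prefer for outgoing traffic, all else being equal.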

Hope this helps… :)

Customizing Deadline, example

loocas | deadline,software,technical | Tuesday, April 19th, 2011


An example of the limitless possibilities for customizing your Deadline installation to make it work in your specific production pipeline.

VRay setup for network rendering

loocas | 3ds Max,deadline,technical,videotutorials | Tuesday, December 7th, 2010


A short video tutorial for those interested in setting up VRay, or any other plugin, for flawless network rendering from a centralized repository.

More power!

loocas | deadline,miscellaneous,technical | Thursday, November 25th, 2010


That’s right. One more render slave added to the mix. So far, I’ve got a full 120 GHz of render power at my disposal, which will come in handy in no time, as I’ll be sending about 36 jobs to the farm tomorrow. :)

Hence the VRay testing (I’ll have to use VRay, for several reasons, on the current job).

Also, I’ll write some more on how to make VRay painlessly distributed over your render farm. Stay tuned!
