I’m still amazed how useful DOS can be, even in 2010! And the main reason I think it’s the most revolutionary OS of all time is that it actually brought the entire PC industry into regular folks’ hands.
Now, by DOS, I’m actually referring to the simplest form of an OS environment: IO.SYS, MSDOS.SYS and COMMAND.COM. That’s all you need in order to communicate with your system. That’s all you need to actually get some work done! Isn’t it amazing?
Obviously, Windows and MacOS heavily extended OS functionality and brought something else to the game. But that’s just evolution. DOS, on the other hand, was truly revolutionary. I may be skipping some other important OS attempts, but DOS was the first OS I ever used as a little kid. I remember when I was about 8 or 9 years old, my grandpa had an, at the time, high-end 286 computer: 512KB of RAM, some 30MB HDD. It was a beast! And it was running DOS. I learnt a few basic commands, such as CD, MD, CLS, COPY etc., just to be able to run Prince of Persia or Wolf3D.
The entire studio Ethernet ran on a standard 100Mbps line, 100BASE-T Fast Ethernet, which used to be enough. However, with the addition of a render farm and fast centralized storage, I needed an upgrade. Thankfully, nowadays, Gigabit Ethernet is becoming pretty mainstream as well, so I didn’t really have to put too much money into the whole network. All the standard, mainstream mainboards come with integrated 10/100/1000Mbps network cards, and 1000BASE-T switches and routers are also pretty cheap, so all I really needed was a new switch, a bunch of CAT6 cables (though CAT5e would have been enough as well) and a bit of re-wiring. The new network topology can be seen at the top.
Hell yeah! After a few months of putting the gear together, getting all the paperwork done, and installing the electrical and network cabling, we’re finally running our own data management platform, with the added bonus of a private render farm. How cool is that?!
I’ve just finished updating the entire WordPress back-end of this blog to the latest version, 2.9.1. So, if all went well, you should see absolutely no difference at all.
I finally started building the render slaves for my studio. The first dedicated render node I built is based on basic mainstream parts, nothing fancy, but with enough power that it makes sense to place it in a rack installation.
The basic idea, obviously, was to build as powerful a machine as possible for the lowest possible price tag. Since I’ve been an Intel user since, well, forever, I based the machine on a Core i7 860 (Lynnfield) CPU and DDR3 memory; the rest is pretty much optional. But for my purposes, I want every machine in the studio to basically follow the idea of having one dedicated hard drive, preferably a pretty fast one, for the OS and another dedicated one for all the offline data. So, each machine, including the render nodes, will host a C: drive with all the software and programs on it and a D: drive that’ll be set up to hold all the files that we’ll work with. The workstations will optionally have some other HDDs, but these two drives are necessary in order to rule out variables in the pipeline I’ve been building for a few months now.
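The point of the fixed C:/D: convention is that pipeline scripts can always resolve working files against one well-known root. A minimal Python sketch of that idea, with a hypothetical `D:\projects` root and helper name of my own choosing (not part of any actual studio tool):

```python
import ntpath  # Windows-style path handling, works on any platform

# Assumed root of the offline data drive that every machine shares
# per the two-drive convention (C: for OS/software, D: for data).
DATA_ROOT = r"D:\projects"

def resolve_asset(project, relative_path):
    """Map a project-relative asset path onto the shared D: data drive."""
    return ntpath.join(DATA_ROOT, project, relative_path)

print(resolve_asset("spot_01", r"renders\frame_0001.exr"))
```

Because every machine mounts the data drive under the same letter, a path produced this way is valid on any workstation or render node in the pipeline.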
The T-Systems guys were pretty quick! I didn’t expect them to show up this week, but they did. Kudos!
Anyway, the server room is finally plugged into the local central switch. It’s not online yet; T-Systems will have to go through yet another bureaucratic procedure prior to setting up the line. But the hard work is done, and all I need now is an electrical inspection and the T-Systems green light.
We finally got to the first stage of installing and prepping the server room this weekend. The first stage was to get electricity to the computers. The problems here were, firstly, the power drain, and then the network connection. Unfortunately, I didn’t have any other choice but to place the server with the DAS and the render slaves in a storage room, located on the ground floor of the building my studio is in (a former flat). The room is great since the server and the running machines don’t bother anybody, but it’s not properly air conditioned, it didn’t have any power outlets (except for the light), and it wasn’t connected, or even easily connectable, to the LAN switch.
It seems that GPU accelerated rendering has been the hottest topic recently. But why should only the rendering get accelerated on the powerful GPUs?
There are tons of other applications that desperately need acceleration. Simulation, for example. Cloth, hair, particles, rigid bodies etc. all need some heavy calculations, and are actually quite simple, in comparison to rendering, that is. Wouldn’t you prefer your cloth sims to be faster than real-time? Wouldn’t you love to be able to have physically accurate dynamics in your particle simulations? Or better yet, wouldn’t you love to be able to mix all this together in one mighty-powerful framework, all running on our GPUs? I would!
That’s right! I needed to convert a fairly complex, proprietary LUT for preview purposes into one of IRIDAS’ formats. I chose .ilut, since it’s a really simple yet extremely powerful format!
Let’s take a look at the syntax first. On the IRIDAS online documentation site, you’ll see all the different LUT formats FrameCycler supports, as well as the syntax for those files. The .ilut format is pretty flexible: it supports either an XML (ASCII) syntax or an interpretable script format. For my complex LUT, I used the XML format. That LUT wouldn’t be of much use to you, as it’s tied to a specific machine with a specific graphics card, a specifically calibrated display and generally a different color workflow than the rest of the machines at the studio. But, to demonstrate the usefulness of this LUT file format, I’ll show you the sRGB LUT I created a while ago using the interpretable script syntax, for previewing linear images (OpenEXR, for example).
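The curve such a preview LUT encodes is just the standard sRGB transfer function (linear below a small threshold, a gamma-like power curve above it). A little Python sketch of that per-channel curve, with names of my own invention, nothing to do with the .ilut syntax itself:

```python
def linear_to_srgb(x):
    """Standard sRGB encoding (IEC 61966-2-1) for a linear value in [0, 1]."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * (x ** (1.0 / 2.4)) - 0.055

# A 1D lookup table sampled from the curve -- conceptually the kind of
# per-channel table a preview LUT describes (the size is illustrative).
lut = [linear_to_srgb(i / 255.0) for i in range(256)]
```

Applying this curve to linear OpenEXR pixels is exactly what makes them display correctly on an sRGB monitor, which is what the preview LUT is for.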
I bought an aging file server, an HP ProLiant ML350 G5, and I want to transform it into a modern workstation that’ll be used for compositing tasks (Nuke, Photoshop and maybe After Effects). However, I’ve run into some issues. Mainly, the server doesn’t have a dedicated PCI-Express 16x slot for modern graphics cards; it only sports 3 PCI-e 4x slots, which sucks (how much it sucks I’ll know after some benchmarks). The server also has only one CPU, a quite old and slow one, and only 2GB of DDR2 RAM. The RAM is the least of the issues; the CPUs are quite expensive to upgrade, so I’ll keep the one already in there for the time being. But the GPU is the bummer! I’ve documented an approximately 15-minute-long video of “hacking” a regular graphics card into the PCI-e 4x (actually of the 8x size) slot, for those curious ones out there (I sure was!).