Hell yeah! After a few months of putting the gear together, getting all the paperwork done, and installing the electrical and network cabling, we’re finally running our own data management platform, with the added bonus of a private render farm. How cool is that?!
I finally started building the render slaves for my studio. The first dedicated render node I built is based on basic mainstream parts, nothing fancy, but powerful enough to justify a spot in a rack installation.
The basic idea, obviously, was to build as powerful a machine as possible for the lowest possible price. Since I’ve been an Intel user since, well, forever, I based the machine on a Core i7 860 (Lynnfield) CPU and DDR3 memory; the rest is pretty much optional. For my purposes, though, I want every machine in the studio to follow the same idea: a dedicated, preferably fairly fast hard drive for the OS and a dedicated one for all the offline data. So each machine, including the render nodes, will host a C: drive with all the software and programs on it, and a D: drive set up to hold all the files we’ll work with. The workstations will optionally have some other HDDs, but these two drives are necessary in order to rule out variables in the pipeline I’ve been building for a few months now.
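As a quick illustration of that convention, here’s a minimal Python sketch (a hypothetical throwaway check of my own, not part of any studio tool) that verifies a machine exposes the expected C: and D: drives and reports how much space is free on each:

```python
import os
import shutil

# Hypothetical helper, not part of any studio tool: checks that a machine
# follows the C:/D: drive layout described above.
EXPECTED_DRIVES = {
    "C:\\": "OS and installed software",
    "D:\\": "offline project data",
}

for drive, purpose in EXPECTED_DRIVES.items():
    if os.path.exists(drive):
        total, _used, free = shutil.disk_usage(drive)
        print(f"{drive} ({purpose}): {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
    else:
        print(f"{drive} is missing -- this machine doesn't follow the pipeline layout")
```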
The T-Systems guys were pretty quick! I didn’t expect them to show up this week, but they did. Kudos!
Anyways, the server room is finally plugged into the local central switch. It’s not online yet; T-Systems has to go through yet another bureaucratic procedure before setting up the line. But the hard work is done, and all I need now is the electrical inspection and the T-Systems green light.
We finally got to the first stage of installing and prepping the server room this weekend: getting electricity to the computers. The two problems to solve were the power draw and the network connection. Unfortunately, I had no choice but to place the server with the DAS and the render slaves in a storage room on the ground floor of the building my studio is in (a former flat). The room is great since the server and the running machines don’t bother anybody, but it isn’t properly air conditioned, it had no power outlets (except for the light), and it had no connection, nor any easy way to get one, to the LAN switch.
I bought an aging file server, an HP ProLiant ML350 G5, and I want to transform it into a modern workstation for compositing tasks (Nuke, Photoshop and maybe After Effects). However, I’ve run into some issues. Mainly, the server doesn’t have a dedicated PCI-Express x16 slot for modern graphics cards; it only sports three PCIe x4 slots (electrically x4, in x8-sized connectors), which sucks (how much it sucks I’ll know after some benchmarks). The server also has only one CPU, a quite old and slow one, and only 2GB of DDR2 RAM. The RAM is the least of my worries, and the CPUs are quite expensive to upgrade, so I’ll keep the one already in there for the time being. But the GPU is the bummer! I’ve recorded an approximately 15-minute video of “hacking” a regular graphics card into the x4 (physically x8-sized) slot, for those curious ones out there (I sure was!).
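Before the benchmarks, here’s a back-of-the-envelope comparison of what the slot difference means in theory, assuming (my assumption, to be confirmed) that the ML350 G5’s slots are first-generation PCI Express at roughly 250 MB/s per lane in each direction:

```python
# PCIe 1.x offers roughly 250 MB/s per lane per direction (2.5 GT/s with
# 8b/10b encoding) -- an assumption about this server's slot generation.
MB_PER_LANE = 250

for lanes in (4, 16):
    print(f"x{lanes}: ~{lanes * MB_PER_LANE} MB/s per direction")
# x4  -> ~1000 MB/s: what the hacked-in card actually gets in this server
# x16 -> ~4000 MB/s: what a modern card expects in a workstation board
```

So on paper the card is limited to a quarter of the usual bandwidth; how much that hurts in practice is exactly what the benchmarks should tell me.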
I finally received the Windows Server 2008 Standard (not R2) package via the terrible Czech postal service, so I can finally start to fully concentrate on the server-side software development for my studio.
I’ll write about it some more later, when I actually have something worth showing off.
I’ve finally received all the parts I needed in order to set up the PowerVault MD1000 in a RAID-5 configuration (5 x 500GB HDDs). I even made a video documenting the whole process, but at the moment it’s in Czech only; I don’t know when, or if, I’ll get around to narrating it in English. For those of you who understand, enjoy!
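For the record, the usable capacity of the array works out like this (a trivial calculation, but a handy sanity check when choosing drive counts):

```python
def raid5_usable_gb(drive_count: int, drive_size_gb: int) -> int:
    # RAID-5 spends the equivalent of one drive on parity,
    # so the usable space is (n - 1) drives' worth.
    return (drive_count - 1) * drive_size_gb

# The array from this post: 5 x 500GB drives.
print(raid5_usable_gb(5, 500))  # 2000 GB usable, 500 GB worth of parity
```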
I’ve finally found some spare time to dive into the server installation. After a bit of evaluation I chose Windows Server 2008 R2 Standard (I couldn’t choose Linux for several reasons, and this version of Windows was the best option). However, the installation process isn’t as easy and boring as on any regular PC; the server is a bit of a proprietary mash of partly commercially available parts and thus requires a somewhat more elaborate approach.
The very first time I booted up the server, I had to flash and update its SYSTEM and PERC BIOSes, because the versions they shipped with were a bit older than what Dell officially supports.
I’ve been using tons of external hard drives for storing my data, for backups, and for carrying certain large files around with me.
But with larger projects grew the need for something much more advanced. So I finally bought centralized data storage: a DAS (Direct-Attached Storage) unit from Dell. Specifically, an MD1000, an expansion disk array that connects directly to a server via a SAS controller and acts as a physical drive on that server, with, in this case, up to 7.5TB of redundant storage!
duber studio and its projects (duber.tv, this blog, mycirneco.com, chargethedragon.com and various other hosted services) have today successfully migrated all their databases to our own server, a Dell PowerEdge. We’ve been running our own server since July 2007; however, most of our databases were still kept at our hosting partner, Hokosoft s.r.o., which still manages our server housing and maintenance. The move means slightly faster db transfers (most likely not noticeable to visitors of our sites), but in terms of administration and management it’s a very welcome change, as we can now start integrating our vfx pipeline and management tools with a remote server for faster and smoother collaboration.