1000BASE-T Ethernet upgrade

loocas | hardware,miscellaneous,technical | Saturday, January 30th, 2010

[Image: duber studio net topology]

The entire studio Ethernet ran on a standard 100Mbps line, 100BASE-T Fast Ethernet, which used to be enough. However, with the addition of a render farm and fast centralized storage, I needed an upgrade. Thankfully, nowadays Gigabit Ethernet is becoming pretty mainstream, so I didn’t really have to put too much money into the whole network. All the standard, mainstream motherboards come with an integrated 10/100/1000Mbps network card, and 1000BASE-T switches and routers are also pretty cheap, so all I really needed was a new switch, a bunch of CAT6 cables (CAT5e would have been enough as well) and a bit of re-wiring. The new network topology can be seen at the top.

Here are a few shots of the setup while I was working on it (I have a habit of documenting a lot of stuff, come to think of it. :D ).

[Photos: Network Upgrade (6 images)]

Anyways, we’re on a 1Gbps line all across the studio now, and when I tested copying a 14GB file from one PC to another, the average transfer speed was about 35MB/s, so the copy took about 7 minutes. Not too bad. Obviously, transferring tons of tiny files will always be an issue, but the upgrade should help with the larger ones, such as scenes, source textures, renders, point caches etc…
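
For the curious, here’s a quick sanity check of those numbers in a few lines of Python (only the 14GB file size and the ~35MB/s average come from the test above; everything else is plain arithmetic):

```python
# Sanity check of the copy test: a 14GB file at an average of ~35MB/s.
file_size_gb = 14     # size of the test file
throughput_mb_s = 35  # average speed observed during the copy

# 35 MB/s is roughly 280 Mbps on the wire, i.e. under a third of the
# 1 Gbps theoretical maximum (disk speed and protocol overhead eat the rest).
effective_mbps = throughput_mb_s * 8
utilisation = effective_mbps / 1000

# Expected transfer time in minutes.
transfer_minutes = (file_size_gb * 1024) / throughput_mb_s / 60

print(f"~{effective_mbps} Mbps on the wire ({utilisation:.0%} of 1 Gbps)")
print(f"expected copy time: ~{transfer_minutes:.1f} minutes")
```

That lines up nicely with the roughly 7 minutes I measured.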

6 Comments »

  1. I think you will have issues once you render with about 5 nodes and all of them read 10GB max files and simulation files of 3GB per frame (if I work with you and use Fume and TP ;) ) and then write 2k images with 15 elements. You need RAID or even better stuff (if this exists) with one IP and one ethernet cable per card. I heard that there is such stuff, but it’s expensive.

    The last studio I worked at had that issue. It’s not an everyday case… but once you get into that situation and have a tight deadline, you will look at me, and I will just smile and shrug ;)

    What they did was copy all the shit to the local drives of the render nodes. Paiiiin!!

    Comment by goran — February 27, 2010 @ 03:18

  2. Oh… other than that… nice work, man!! Looking forward to testing that shiiiaaaat! :D

    Comment by goran — February 27, 2010 @ 03:20

  3. For sure, but I won’t have these issues for at least a year or two. So… ;) By that time, I won’t have problems investing in BlueArc or Aberdeen systems, which are fucking expensive, but fucking high-end! :)

    Comment by loocas — February 27, 2010 @ 23:56

  4. Hi Lukas,

    In our studio we have also just set up our server with render nodes and a 1 gigabit network.
    We have a NIC in our server that has two 1 gigabit ports and is capable of trunking (link aggregation). This way it should be able to double the throughput from the server.
    But your switches must also be able to handle trunking.
    I haven’t gotten around to configuring the trunking yet, but I hope to get to it this month.

    Best regards,

    Joost

    Comment by Joost — April 26, 2010 @ 18:56

  5. Hey, Joost,

    yeah, network trunking is indeed a cool concept and a very viable one, since almost all servers have dual ethernet controllers by default. I haven’t set mine up just yet since I don’t need that much bandwidth yet. But I might give it a shot just for the heck of it :)

    Comment by loocas — April 26, 2010 @ 20:52
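
To put goran’s render-farm worry and Joost’s trunking idea into rough numbers, here is a small back-of-envelope sketch in Python (the node count, cache sizes and link efficiency below are purely illustrative assumptions, not measurements from this setup):

```python
# Back-of-envelope: how long N render nodes take to pull their per-frame
# data over a shared server uplink, with and without 2x1 Gbps trunking.
# All the numbers below are illustrative assumptions, not real measurements.

def pull_time_seconds(nodes, gb_per_node, link_gbps, efficiency=0.7):
    """Time for all nodes to read their data over one shared uplink.

    efficiency is a rough fudge factor for protocol overhead and
    disk limits on the server side.
    """
    total_bits = nodes * gb_per_node * 8e9
    return total_bits / (link_gbps * 1e9 * efficiency)

nodes, gb_per_node = 5, 3  # e.g. a 3GB simulation cache per node and frame

single = pull_time_seconds(nodes, gb_per_node, link_gbps=1)
trunked = pull_time_seconds(nodes, gb_per_node, link_gbps=2)

print(f"1 Gbps uplink : ~{single / 60:.1f} min per frame's worth of caches")
print(f"2x1 Gbps trunk: ~{trunked / 60:.1f} min, if the server disks and "
      "the switch's link aggregation can keep up")
```

One caveat: with standard link aggregation a single transfer between two machines usually still rides on one 1 Gbps link, so the doubling mainly shows up when several render nodes hit the server at the same time.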
