From: "Dennis Wilkins" Received: from biz127.inmotionhosting.com ([216.194.169.13] verified) by media-motion.tv (CommuniGate Pro SMTP 6.1.0) with ESMTPS id 7191343 for AE-List@media-motion.tv; Thu, 15 Nov 2018 19:52:21 +0100 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=reelsolutions.com; s=default; h=Message-Id:In-Reply-To:To:References:Date: Subject:Mime-Version:Content-Type:From:Sender:Reply-To:Cc: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=ZkRRK8p+YQddtG19d8AIAfWDNQLWygP+aHOapRxwaPg=; b=SA28la7eNPKcKQ6FMQX8gva1G 08np8lUClAdKPhZ4R50MpVgrNbNoPnVhwzYAxeEKZsIQ///l09Drasn0DPEHY9n3ojzrDp+h2q2P3 chmNAyaETNxQxE/oGiXBvP7i1WL8i7msnfmSGOEYmrdJ9SR3xHvhPvX+hjbrNyxbMxaJQ=; Received: from [73.222.249.170] (port=34578 helo=[192.168.1.92]) by biz127.inmotionhosting.com with esmtpsa (TLSv1.2:ECDHE-RSA-AES256-GCM-SHA384:256) (Exim 4.91) (envelope-from ) id 1gNMwm-00ApJJ-Sj for AE-List@media-motion.tv; Thu, 15 Nov 2018 11:05:07 -0800 Content-Type: multipart/alternative; boundary="Apple-Mail=_725BA89C-7579-48DE-874C-A719A2D92DF5" Mime-Version: 1.0 (Mac OS X Mail 11.5 \(3445.9.1\)) Subject: Re: [AE] Constant hangs when switching between apps Date: Thu, 15 Nov 2018 11:05:00 -0800 References: To: After Effects Mail List In-Reply-To: Message-Id: <2CBAE603-165B-488E-B1A0-6586C3FC5799@reelsolutions.com> X-Mailer: Apple Mail (2.3445.9.1) X-OutGoing-Spam-Status: No, score=-1.0 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - biz127.inmotionhosting.com X-AntiAbuse: Original Domain - media-motion.tv X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - reelsolutions.com X-Get-Message-Sender-Via: biz127.inmotionhosting.com: authenticated_id: dennis@reelsolutions.com X-Authenticated-Sender: biz127.inmotionhosting.com: dennis@reelsolutions.com X-Source: X-Source-Args: X-Source-Dir: --Apple-Mail=_725BA89C-7579-48DE-874C-A719A2D92DF5 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=utf-8 Working on one film with very unique specs for several years is very = different than 90-95% of production studios and agencies who may be = turning over 1-15 jobs per week and has multiple artists working on = several projects simultaneously (and possibly even hopping from machine = to machine). Don=E2=80=99t even start trying to figure out how to go = about archiving these jobs if the work files are all over the place - = especially when using freelancers who have different organization and = work habits and may leave before the project is even completed. Try integrating a render farm with multiple artists using the setup = you=E2=80=99re recommending... Once you add in live action plates, editorial, mgfx, and vfx tasks and = even if you could possibly manage all of this you would cause nearly the = same amount of network thrashing just trying to get artists the right = source files, not to mention always having to wait for the files to be = synced to the right places. The network would come to a standstill every time you started a new = project while trying to sync everything to the right destinations. A better option in AE=E2=80=99s case is to have a large SSD for your = Disk Cache; you should then effectively be reducing network traffic but = more intelligently and without the IT overhead. 
Dennis Wilkins


> On Nov 15, 2018, at 10:24 AM, Stephen van Vuuren <AE-List@media-motion.tv> wrote:
>
> > Yeah, I'm not sure how we'd effectively have a team of artists working on a bunch of different shots simultaneously on the same project if the files had to be synced locally before they could use them.
>
> I did this many thousands of times on my IMAX film. It's just a matter of getting the workflow right, and because you're only copying a file once or twice across the LAN, instead of hundreds or thousands of times, the speed improvement and cost reduction is very large.
>
> >> using Render Garden to distribute a render that's going right to an editor, etc. and all of that needs to happen immediately -- it doesn't work for something to sync overnight.
>
> It's just a question of what gets distributed where and when; during my heavy years on the film I had hundreds of syncs and mirrors happening a week, dozens on busy days.
>
> >> We sometimes even sync entire projects from our network drive to Dropbox so remote freelancers can work on them as well.
>
> You just argued for the working-local solution. A remote freelancer is functionally identical to a network workstation that is remote from the central server, just with much greater latency and poorer bandwidth, so the issue is painfully easy to see. But that doesn't mean the issue vanishes when someone is inside the building. The identical structural issues are all there, just short enough in duration that we put up with them.
>
> And even if working on a network workstation only takes seconds or minutes longer than working locally in the most conservative examples, over months and years that adds up to massive effects on cost and performance, as well as increased risk of problems.
>
> >> I'm all for things going faster, but that seems impossible without working off of central storage on a network. I'd be curious what kind of infrastructure and workflow design could get around this.
>
> Again, I'm in the final stages of my IMAX film. We had 100+ volunteers, but the bulk of the work was done here, primarily by me, and because of the slowness of working with the files I normally had three workstations plus a laptop running around me (while one machine was previewing frames, I'd move to another). So that's 4 workstations plus 15 render boxes all rendering projects from the three workstations, 2 nearline servers, 2 archival servers, and 2 cloud backup services.
>
> We have three main teams of volunteers all working remotely, using both Dropbox and Gdrive to sync projects and assets, but for some on slow connections, FedEx and a hard drive was the way to go.
>
> Never was a file worked over the network. 33 million files at peak, 700 TB at peak.
>
> The scope of how is both beyond an email discussion and highly dependent on the biz.
>
> From: After Effects Mail List <AE-List@media-motion.tv>
> Sent: Thursday, November 15, 2018 11:30 AM
> To: After Effects Mail List <AE-List@media-motion.tv>
> Subject: Re: [AE] Constant hangs when switching between apps
>
> > Then every VFX studio I've worked at, between 4 and 400 employees, is doing it wrong apparently.
>
> Yeah, I'm not sure how we'd effectively have a team of artists working on a bunch of different shots simultaneously on the same project if the files had to be synced locally before they could use them.
> We're constantly updating an asset, bringing in a new version of the 3D render to comp while it's still rendering, using Render Garden to distribute a render that's going right to an editor, etc., and all of that needs to happen immediately -- it doesn't work for something to sync overnight. We sometimes even sync entire projects from our network drive to Dropbox so remote freelancers can work on them as well.
>
> I'm all for things going faster, but that seems impossible without working off of central storage on a network. I'd be curious what kind of infrastructure and workflow design could get around this.
>
> On Wed, Nov 14, 2018 at 11:27 PM Brendan Bolles <AE-List@media-motion.tv> wrote:
>
> > On Nov 14, 2018, at 12:01 PM, Stephen van Vuuren <AE-List@media-motion.tv> wrote:
> >
> > Workstations working directly off network storage is slower, more expensive, more prone to failure, and a huge waste of time and money.
>
> Then every VFX studio I've worked at, between 4 and 400 employees, is doing it wrong apparently.
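
P.S. For anyone trying to picture what one of those "syncs and mirrors" actually is, a bare-bones one-way mirror pass is roughly this (a sketch only: the server and local paths are placeholders, and a real pass would also need deletion handling, logging, and retries):

import shutil
from pathlib import Path

SRC = Path(r"\\server\projects\JOB_1234")   # placeholder: central storage
DST = Path(r"D:\local_work\JOB_1234")       # placeholder: local working copy

def mirror(src: Path, dst: Path) -> None:
    # Copy any file that is missing locally or newer on the server.
    for src_file in src.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = dst / src_file.relative_to(src)
        if not dst_file.exists() or src_file.stat().st_mtime > dst_file.stat().st_mtime:
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)   # copy2 keeps timestamps for the next comparison
            print(f"updated {dst_file}")

mirror(SRC, DST)

Now multiply that by every new job, every artist, and every render box, and you can see where the setup time and the network thrashing I was describing come from.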