From: Warren Heaton
Date: Fri, 16 Nov 2018 08:32:42 -0800
Subject: Re: [AE] Constant hangs when switching between apps
To: After Effects Mail List

A properly configured SAN works pretty darn well across multiple workstations, but can still bog down during peak use.

It’s when a NAS or SMB/AFP file sharing is used as if it’s a SAN that things tend to go sideways.

-Warren

Sent from my iPhone

> On Nov 15, 2018, at 11:05 AM, Dennis Wilkins wrote:
>
> Working on one film with unique specs for several years is very different from the 90-95% of production studios and agencies that may be turning over 1-15 jobs per week and have multiple artists working on several projects simultaneously (and possibly even hopping from machine to machine). Don’t even start trying to figure out how to archive these jobs if the work files are all over the place, especially when using freelancers who have different organization and work habits and may leave before the project is even completed.
>
> Try integrating a render farm with multiple artists using the setup you’re recommending...
>
> Once you add in live-action plates, editorial, mgfx, and vfx tasks, even if you could possibly manage all of this you would cause nearly the same amount of network thrashing just trying to get artists the right source files, not to mention always having to wait for the files to be synced to the right places.
>
> The network would come to a standstill every time you started a new project while trying to sync everything to the right destinations.
>
> A better option in AE’s case is to have a large SSD for your Disk Cache; you then effectively reduce network traffic, but more intelligently and without the IT overhead.
>
> Dennis Wilkins
>
>> On Nov 15, 2018, at 10:24 AM, Stephen van Vuuren wrote:
>>
>> > Yeah, I'm not sure how we'd effectively have a team of artists working on a bunch of different shots simultaneously on the same project if the files had to be synced locally before they could use them.
>>
>> I did this many thousands of times on my IMAX film. It’s just a matter of getting the workflow right, and because you’re only copying a file once or twice across a LAN, instead of hundreds or thousands of times, the speed improvement and cost reduction is very large.
>>
>> >> using Render Garden to distribute a render that's going right to an editor, etc. and all of that needs to happen immediately -- it doesn't work for something to sync overnight.
>>
>> It’s just a question of what gets distributed where and when – and during my heavy years on the film, I had hundreds of syncs and mirrors happening a week – dozens on busy days.
>>
>> >> We sometimes even sync entire projects from our network drive to Dropbox so remote freelancers can work on them as well.
>>
>> You just argued for a work-local solution. A remote freelancer is functionally identical to a network workstation that is remote from the central server, just with much greater latency and poorer bandwidth, so the issue is painfully easy to see. But that doesn't mean the issue vanishes when someone is inside the building. The identical structural issues are all there, just of short enough duration that we put up with them.
>>
>> And even if working on a network workstation only takes seconds or minutes longer than working locally in the most conservative examples, over months and years that adds up to massive effects on cost and performance, as well as increased risk of problems.
>>
>> >> I'm all for things going faster, but that seems impossible without working off of central storage on a network. I'd be curious what kind of infrastructure and workflow design could get around this.
>>
>> Again, I’m in the final stages of my IMAX film. We had 100+ volunteers, but the bulk of the work was done here, primarily by me. Due to the slowness of working with the files, I normally had three workstations plus a laptop running around me (while one machine was previewing frames, I'd move to the other), so that’s 4 workstations plus 15 render boxes all rendering projects from the three workstations, 2 nearline servers, two archival servers, and 2 cloud backup services.
>>
>> We have three main teams of volunteers, all working remotely and using both Dropbox and Gdrive to sync projects and assets, but for some on slow connections, FedEx and a hard drive was the way to go.
>>
>> Never was a file worked over the network. 33 million files at peak, 700 TB at peak.
>>
>> The scope of how is beyond an email discussion and would vary widely based on the biz.
>>
>> From: After Effects Mail List
>> Sent: Thursday, November 15, 2018 11:30 AM
>> To: After Effects Mail List
>> Subject: Re: [AE] Constant hangs when switching between apps
>>
>> > Then every VFX studio I've worked at, between 4 and 400 employees, is doing it wrong apparently.
>>
>> Yeah, I'm not sure how we'd effectively have a team of artists working on a bunch of different shots simultaneously on the same project if the files had to be synced locally before they could use them. We're constantly updating an asset, bringing in a new version of the 3D render to comp while it's still rendering, using Render Garden to distribute a render that's going right to an editor, etc., and all of that needs to happen immediately -- it doesn't work for something to sync overnight.
>> We sometimes even sync entire projects from our network drive to Dropbox so remote freelancers can work on them as well.
>>
>> I'm all for things going faster, but that seems impossible without working off of central storage on a network. I'd be curious what kind of infrastructure and workflow design could get around this.
>>
>> On Wed, Nov 14, 2018 at 11:27 PM Brendan Bolles wrote:
>>
>> On Nov 14, 2018, at 12:01 PM, Stephen van Vuuren wrote:
>>
>> Workstations working directly off network storage is slower, more expensive, more prone to failure, and a huge waste of time and money.
>>
>> Then every VFX studio I've worked at, between 4 and 400 employees, is doing it wrong apparently.