From: "Jim Curtis" Received: from sonic303-10.consmr.mail.bf2.yahoo.com ([74.6.131.49] verified) by media-motion.tv (CommuniGate Pro SMTP 6.1.0) with ESMTP id 7191902 for AE-List@media-motion.tv; Fri, 16 Nov 2018 19:00:36 +0100 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=flash.net; s=s2048; t=1542392005; bh=EzdMXwxKgdCsUnP3iw6LwSWWxJzy7dmRYh5/pI6m+J0=; h=From:Subject:Date:References:To:In-Reply-To:From:Subject; b=h7Ri+9u/HC0oYJChmsuWwcLqJc40KzhEQZnleMPc1eCHLfOYuSWKbmI9VTt48wftRTed8+JaOXq51+y0JqO3yA0ZuIbxqDuzWFhBXgzyoWJ/mahzvFWJPwRMYSJHME1B4j6VnOYigEhgbvXqfHBAvb54ss7KnhEAEp1Ad917vkDcGEer2hGSj98g7/4gOZv3IEUp7SGRWAmM+POn+ZENxldz9xVUsl0VNvJZo5aiPTq8LOz35SB7OkqmrbleSyS1APPqY+LpCyvxKSElRh2w6PcafRuN/V81SeU7a/umvb/i2HQbRD+Eoo2UtT1+rvN87yWHXdeT3kDc3T9xlqyyVw== X-YMail-OSG: tMRLY5YVM1ldHA93VzocXxtz0HsJ6lBh9MmQ1piPlFZmIvoFl_XkHD.P33TM7ES 5_qa1IMBaP9OQl6eYSkOGClY.spC_OTQSMK6LgDt1iQstY9SgUHoJCsYo2XxZ_ZcuvNNpZk7zBBF c3ItJeTNQVm71nCfCYUQUk9Gw2xgnjDD0go8YuhMQnB3KGeO7yyYkVGkUjj.61b9dLikLtyuaFS9 3P1YS.mBpFil3xp3UC.sDKLAnq1qfxYaIQDIIBwHOgsIn9zt0uNEaS7F25RFQ6YsjOW33840n7_Q wdMfQtkBBTLeqWswoBNsemcwBReBjyfkyG1TJ5w3SKMAsy8jJkkQ_lcW9Qg9d5cAyPED1IkGTGus .pQcupEyVue6FWQHCrTCHCUICDygoUUeLESkUz3dtz15On5BKKHZbaOkYHhqrFfx5vVPwqmoDtph 3uOjrEF6WlHGat_dfsWOKhD4lZGr.uykK_wBe8FxFE70y06OeWr4JkEaiG1LF0.sISfsTn7tTpmg G7ybyDs7raTSJEK72VR_8MvrUvIoWKwUuA4ybw3jKmgzWisbFBm2uhU8F0ZHoAIHfgmEjQ75dG_q 1LMHTLLTBRs19J7iDgcHgfw0ebbj3ZtzeV5DsyiWpFRvuJmk5jLDvLj.jNOYo6f4jQsqKSMsttAY VyV7jy3YG7_xcyoplGnuT5PV7wWe780xF57ZSyU68PoZwaUi1.dx8yCmtnSw3i5.rXgE0C1y.VEU cHEnmpRT2pGq2c4vsLGX3tBIgKIuK578mHYpXY17LvCKyzLnSguNP8ovHl25ErwEWuvbn72z3DBa gzf3iC4REaArZ_BfM99y3x66hXgdrpVg3PvXVFKrnFkDGG0.cmImF8ggJ_j8JOrdhPJDlGSjavd5 2Ny2P_h8tltgz.YY.EotEn1iBooxBgXdimldGzFH3.rXcZnZstpls3uXhfkvDIroG7vn6k3TE4JP U32lTjqHH66SHCLYHYjpeXnC4lfUa.F_tG1xGEqHl.mAIdgh9tFfWjpzQVuie2YVEvWj1CjkV8qa cTLRUM0TzBFj4gLOaLMvxQUR4uDkKqKsXTuBMR7NH0T_sE7ZnMljR4dHEjfFlQXXUyPEn7M3Wah8 xd.6x Received: from sonic.gate.mail.ne1.yahoo.com by sonic303.consmr.mail.bf2.yahoo.com with HTTP; Fri, 16 Nov 2018 18:13:25 +0000 Received: from 104-48-143-159.lightspeed.rcsntx.sbcglobal.net (EHLO mac-pro.attlocal.net) ([104.48.143.159]) by smtp430.mail.bf1.yahoo.com (Oath Hermes SMTP Server) with ESMTPA ID 02fdd5c9a3e971340dbf1082087c0249 for ; Fri, 16 Nov 2018 18:13:21 +0000 (UTC) Content-Type: multipart/alternative; boundary="Apple-Mail=_BB3914BB-9733-4CF7-932E-924E018D3689" Mime-Version: 1.0 (Mac OS X Mail 11.5 \(3445.9.1\)) Subject: Re: [AE] Constant hangs when switching between apps Date: Fri, 16 Nov 2018 12:13:19 -0600 References: To: After Effects List In-Reply-To: Message-Id: <7E7F1DFE-F315-42C0-AFC6-3E192EFD51BD@flash.net> X-Mailer: Apple Mail (2.3445.9.1) --Apple-Mail=_BB3914BB-9733-4CF7-932E-924E018D3689 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=utf-8 I thought we were supposed to be editing 8K off the cloud by now. > On Nov 16, 2018, at 11:34 AM, Stephen van Vuuren = wrote: >=20 > > A properly configured SAN works pretty darn well across multiple = workstations, but can still bog down during peak use. > =20 > Which is a failure of engineering 101 =E2=80=93 you design = infrastructure to handle Max Q i.e. beyond peak use, not for normal = traffic. Sure, this basic principle is violated all the time in the = world and creates a lot of problems (DC Beltway), but in the case of = storage, there is no reason other than it=E2=80=99s expedient. It=E2=80=99= s certainly not cheaper or fasters. 
> =20 > Best, > =20 > stephen van vuuren > 336.202.4777 > =20 > http://www.insaturnsrings.com/ > http://www.sv2studios.com/ > http://www.sv2dcp.com/ > =20 > A film is =E2=80=93 or should be =E2=80=93 more like music than like = fiction. It should be a progression of moods and feelings. The theme, = what=E2=80=99s behind the emotion, the meaning, all that comes later. > =E2=80=93Stanley Kubrick > =20 > From: After Effects Mail List >=20 > Sent: Friday, November 16, 2018 11:33 AM > To: After Effects Mail List > > Subject: Re: [AE] Constant hangs when switching between apps > =20 > A properly configured SAN works pretty darn well across multiple = workstations, but can still bog down during peak use. > =20 > It=E2=80=99s when a NAS or SMB/AFP filesharing is used as if it=E2=80=99= s a SAN that things tend to go sideways. > =20 > =20 > =20 > =20 > -Warren > =20 > =20 > =20 > =20 > =20 > Sent from my iPhone >=20 > On Nov 15, 2018, at 11:05 AM, Dennis Wilkins > wrote: >=20 > Working on one film with very unique specs for several years is very = different than 90-95% of production studios and agencies who may be = turning over 1-15 jobs per week and has multiple artists working on = several projects simultaneously (and possibly even hopping from machine = to machine). Don=E2=80=99t even start trying to figure out how to go = about archiving these jobs if the work files are all over the place - = especially when using freelancers who have different organization and = work habits and may leave before the project is even completed. > =20 > Try integrating a render farm with multiple artists using the setup = you=E2=80=99re recommending... > =20 > Once you add in live action plates, editorial, mgfx, and vfx tasks and = even if you could possibly manage all of this you would cause nearly the = same amount of network thrashing just trying to get artists the right = source files, not to mention always having to wait for the files to be = synced to the right places. > =20 > The network would come to a standstill every time you started a new = project while trying to sync everything to the right destinations. > =20 > A better option in AE=E2=80=99s case is to have a large SSD for your = Disk Cache; you should then effectively be reducing network traffic but = more intelligently and without the IT overhead. > =20 > Dennis Wilkins > =20 > =20 > =20 > =20 > On Nov 15, 2018, at 10:24 AM, Stephen van Vuuren = > wrote: > =20 > > Yeah, I'm not sure how we'd effectively have a team of artists = working on a bunch of different shots simultaneously on the same project = if the files had to be synced locally before they could use them.=20 > =20 > I did this many thousand of times on my IMAX film. It=E2=80=99s just = getting the workflow right and because you=E2=80=99re only copying a = file once or twice across a LAN, instead of hundreds or thousands of = times, the speed improvement and cost reduction is very large. > =20 > >>using Render Garden to distribute a render that's going right to an = editor, etc. and all of that needs to happen immediately -- it doesn't = work for something to sync overnight.=20 > =20 > It=E2=80=99s just a question of what gets distributed where and when = =E2=80=93 and during my heavy years on the film, I had hundreds of syncs = and mirrors happening a week =E2=80=93 dozens on busy days. > =20 > >> We sometimes even sync entire projects from our network drive to = Dropbox so remote freelancers can work on them as well. > =20 > You just argued for working local solution. 
A remote freelancer is = functionally identical to a network workstation which is remote from = central server. Just much greater latency and poorer bandwidth, so the = issue is painfully easy to see. But that does mean this issue vanishes = when someone is inside the building. The identical structural issues are = all there, just short enough duration that we put up with it. > =20 > But just because something only takes seconds or minutes longer = working network workstation then working locally in the most = conservative examples, that means over time, months, years, we are = talking about massive effects on cost and performance as well increased = risks of problems. > =20 > >> I'm all for things going faster, but that seems impossible without = working off of central storage on a network. I'd be curious what kind = of infrastructure and workflow design could get around this. > =20 > Again, I=E2=80=99m in the final stages of my IMAX film. We had 100+ = volunteers but the bulk of the work was done here, primarily by me but = due to the slowness of working with the files, I normally had three = workstations plus a laptop running around me (while one machine was = previewing frames, move to the other) so that=E2=80=99s 4 workstations = plus 15 render boxes all rendering projects from the three workstation, = 2 nearline servers and two archival servers and 2 cloud backup services. > =20 > We have three main teams of volunteers all working remotely and using = both Dropbox and Gdrive to sync projects and assets but for some on slow = connections, FedEx and hard drive was the way to go. >=20 > Never was a file worked over the network. 33 Million files at peak, = 700 TB at peak. >=20 > The scope of how is both beyond email discussion and would vary widely = based on the biz. > =20 > =20 > From: After Effects Mail List >=20 > Sent: Thursday, November 15, 2018 11:30 AM > To: After Effects Mail List > > Subject: Re: [AE] Constant hangs when switching between apps > =20 > > Then every VFX studio I've worked at, between 4 and 400 employees, = is doing it wrong apparently. > =20 > Yeah, I'm not sure how we'd effectively have a team of artists working = on a bunch of different shots simultaneously on the same project if the = files had to be synced locally before they could use them. We're = constantly updating an asset, bringing in a new version of the 3D render = to comp while it's still rendering, using Render Garden to distribute a = render that's going right to an editor, etc. and all of that needs to = happen immediately -- it doesn't work for something to sync overnight. = We sometimes even sync entire projects from our network drive to Dropbox = so remote freelancers can work on them as well. > =20 > I'm all for things going faster, but that seems impossible without = working off of central storage on a network. I'd be curious what kind = of infrastructure and workflow design could get around this. > =20 > On Wed, Nov 14, 2018 at 11:27 PM Brendan Bolles = > wrote: > =20 > On Nov 14, 2018, at 12:01 PM, Stephen van Vuuren = > wrote: > =20 > Workstations working directly off network storage is slower, more = expensive, more prone to failure and a huge waste of time and money. > =20 > =20 > Then every VFX studio I've worked at, between 4 and 400 employees, is = doing it wrong apparently. --Apple-Mail=_BB3914BB-9733-4CF7-932E-924E018D3689 Content-Transfer-Encoding: quoted-printable Content-Type: text/html; charset=utf-8 I = thought we were supposed to be editing 8K off the cloud by now.



On Nov 16, 2018, at 11:34 AM, Stephen van Vuuren <AE-List@media-motion.tv> wrote:

> A properly configured SAN works pretty darn well across multiple workstations, but can still bog down during peak use.
 
Which is a failure of engineering 101 – you design infrastructure to handle Max Q, i.e. beyond peak use, not for normal traffic. Sure, this basic principle is violated all the time in the world and creates a lot of problems (DC Beltway), but in the case of storage, there is no reason other than it's expedient. It's certainly not cheaper or faster.
 
Best,
 
stephen van vuuren
336.202.4777

http://www.insaturnsrings.com/
http://www.sv2studios.com/
http://www.sv2dcp.com/

A film is – or should be – more like music than like fiction. It should be a progression of moods and feelings. The theme, what's behind the emotion, the meaning, all that comes later.
–Stanley Kubrick
 
From: After Effects Mail List <AE-List@media-motion.tv>
Sent: Friday, November 16, 2018 11:33 AM
To: After Effects Mail List <AE-List@media-motion.tv>
Subject: Re: [AE] Constant hangs when switching between apps
 
A properly configured SAN works pretty darn well across multiple workstations, but can still bog down during peak use.
 
It's when a NAS or SMB/AFP filesharing is used as if it's a SAN that things tend to go sideways.
 
 
 
 
-Warren
 
 
 
 
  
Sent from my iPhone


On Nov 15, 2018, at 11:05 AM, Dennis Wilkins <AE-List@media-motion.tv> wrote:

Working on one film with very unique specs for several years is very different from the 90-95% of production studios and agencies that may be turning over 1-15 jobs per week and have multiple artists working on several projects simultaneously (and possibly even hopping from machine to machine). Don't even start trying to figure out how to go about archiving these jobs if the work files are all over the place - especially when using freelancers who have different organization and work habits and may leave before the project is even completed.
 
Try integrating a render farm with multiple artists using the setup you're recommending...
 
Once you add in live action plates, editorial, mgfx, and vfx tasks, even if you could possibly manage all of this, you would cause nearly the same amount of network thrashing just trying to get artists the right source files, not to mention always having to wait for the files to be synced to the right places.
 
The network would come to a standstill every time you started a new project while trying to sync everything to the right destinations.
 
A better option in AE's case is to have a large SSD for your Disk Cache; you then effectively reduce network traffic, but more intelligently and without the IT overhead.
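If you want to size that cache drive, a minimal Python sketch along these lines will total up the current cache folder and report free space on its volume. The cache path below is just a hypothetical example; point it at whatever folder you actually set under Preferences > Media & Disk Cache.

    # Minimal sketch: total the current AE disk cache and report free space.
    # CACHE_DIR is a hypothetical example path -- substitute the folder you
    # actually set under Preferences > Media & Disk Cache.
    import os
    import shutil

    CACHE_DIR = os.path.expanduser("~/AE_Disk_Cache")

    if not os.path.isdir(CACHE_DIR):
        raise SystemExit(f"Set CACHE_DIR to your actual disk cache folder: {CACHE_DIR}")

    total_bytes = 0
    for root, _dirs, files in os.walk(CACHE_DIR):
        for name in files:
            try:
                total_bytes += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # a cache file can vanish while AE is purging

    usage = shutil.disk_usage(CACHE_DIR)
    print(f"Cache holds {total_bytes / 1e9:.1f} GB")
    print(f"Volume: {usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB")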
 
Dennis Wilkins
 
 
 
 
On Nov 15, 2018, at 10:24 AM, Stephen van Vuuren <AE-List@media-motion.tv> wrote:
 
> Yeah, I'm not sure how we'd effectively have a team of artists working on a bunch of different shots simultaneously on the same project if the files had to be synced locally before they could use them.
 
I did this many thousands of times on my IMAX film. It's just a matter of getting the workflow right, and because you're only copying a file once or twice across a LAN, instead of hundreds or thousands of times, the speed improvement and cost reduction is very large.
 
>> using Render Garden to distribute a render that's going right to an editor, etc. and all of that needs to happen immediately -- it doesn't work for something to sync overnight.
 
It's just a question of what gets distributed where and when – and during my heavy years on the film, I had hundreds of syncs and mirrors happening a week – dozens on busy days.
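The mechanics are nothing exotic. Here's a minimal Python sketch of the kind of one-way mirror I mean, assuming rsync is available on the machine; the server and local paths are made-up examples.

    # Minimal sketch of a one-way mirror from a nearline server to a local
    # work drive. Assumes rsync is installed (macOS/Linux); both paths are
    # hypothetical examples.
    import subprocess

    SOURCE = "/Volumes/Nearline/ShowName/shot_0420/"  # trailing slash = copy contents
    DEST = "/Volumes/LocalWork/ShowName/shot_0420/"

    subprocess.run(
        [
            "rsync",
            "-a",        # archive mode: recurse, preserve times and permissions
            "--delete",  # make DEST an exact mirror of SOURCE
            SOURCE,
            DEST,
        ],
        check=True,  # raise if the sync fails
    )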
 
>> We sometimes even sync entire projects from our network drive to Dropbox so remote freelancers can work on them as well.
 
You just argued for a working-local solution. A remote freelancer is functionally identical to a networked workstation that is remote from the central server, just with much greater latency and poorer bandwidth, so the issue is painfully easy to see. But that does not mean the issue vanishes when someone is inside the building. The identical structural issues are all there, just of short enough duration that we put up with them.
 
Even if working on a network workstation only takes seconds or minutes longer than working locally in the most conservative examples, over time (months, years) we are talking about massive effects on cost and performance, as well as increased risk of problems.
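To put rough numbers on that (every figure below is a hypothetical assumption, not a measurement):

    # Back-of-the-envelope arithmetic -- all inputs are hypothetical assumptions.
    extra_seconds_per_open = 30       # extra wait per network open/save vs. local
    opens_per_artist_per_day = 40
    artists = 5
    working_days_per_year = 230

    wasted_hours = (extra_seconds_per_open * opens_per_artist_per_day
                    * artists * working_days_per_year) / 3600
    print(f"~{wasted_hours:.0f} hours of waiting per year")  # ~383 with these inputs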
 
>> I'm all for things going faster, but that seems impossible without working off of central storage on a network. I'd be curious what kind of infrastructure and workflow design could get around this.
 
Again, I'm in the final stages of my IMAX film. We had 100+ volunteers, but the bulk of the work was done here, primarily by me. Due to the slowness of working with the files, I normally had three workstations plus a laptop running around me (while one machine was previewing frames, I'd move to another), so that's 4 workstations plus 15 render boxes all rendering projects from the three workstations, 2 nearline servers, two archival servers, and 2 cloud backup services.
 
We have three main teams of volunteers all working remotely, using both Dropbox and Gdrive to sync projects and assets, but for some on slow connections, FedEx and a hard drive was the way to go.

Never was a file worked over the network. 33 million files at peak, 700 TB at peak.

The full scope of how is beyond an email discussion and would vary widely based on the biz.
 
 
From: After Effects Mail List <AE-List@media-motion.tv>
Sent: Thursday, November 15, 2018 11:30 AM
To: After Effects Mail List <AE-List@media-motion.tv>
Subject: Re: [AE] Constant hangs when switching between apps
 
> Then every VFX studio I've worked at, between 4 and 400 employees, is doing it wrong apparently.
 
Yeah, I'm not sure how we'd effectively have a team of artists working on a bunch of different shots simultaneously on the same project if the files had to be synced locally before they could use them. We're constantly updating an asset, bringing in a new version of the 3D render to comp while it's still rendering, using Render Garden to distribute a render that's going right to an editor, etc. and all of that needs to happen immediately -- it doesn't work for something to sync overnight. We sometimes even sync entire projects from our network drive to Dropbox so remote freelancers can work on them as well.
 
I'm all for things going faster, but that seems impossible without working off of central storage on a network. I'd be curious what kind of infrastructure and workflow design could get around this.
 
On Wed, Nov 14, 2018 at 11:27 PM Brendan Bolles <AE-List@media-motion.tv> wrote:
 
On Nov 14, 2018, at 12:01 PM, Stephen van Vuuren <AE-List@media-motion.tv> wrote:
 
Workstations working directly off network storage are slower, more expensive, more prone to failure, and a huge waste of time and money.
 
 
Then every VFX studio I've worked at, between 4 and 400 employees, is doing it wrong apparently.