Wednesday, July 17, 2013

Editing The Choir video from Audio Feed Festival

In theory the quick way to get through this is to import the footage and edit one song per band, not entire concerts. That is the goal.

It would be nice, however, to keep the imported clips and do more extensive edits for the bands, at least the ones I have decent video clips of.

The Audio Feed Festival video differed between the two stages: the Arkansas Stage vs. the Gallery Stage.

Arkansas Stage:
Smaller, but in a building.
Lower-cost lighting, which was actually better for 3D cameras because it avoided rolling-shutter problems.
More hot spots from par lights being too bright, especially on the main singer's position.

The Gallery Stage:
Better acoustics for recording because it was in a tent.
Being in a tent is both a blessing and a curse: it's better because there are fewer sound reflections, and worse because you pick up more outside noise.
Better lighting, better fill-light control.
However, the LED lighting, which had many more effects, caused rolling-shutter problems for my Fujix 3D camera and for some other video shooters out there. Fortunately for me, the Sony camcorders shooting 2D had no problem with the LED lighting. High-frequency, flicker-free LED lighting for video costs a ton of money, and many concerts can't afford it.

That's about it for the comparisons.

Now for the editing issues.

First, my video clips are far from complete.

My biggest problem was not getting to the Audio Feed Festival early enough; missing the first official day and being there for only one day definitely didn't help. Not actually talking with the other guy shooting video for the festival didn't help much either. I had to take a leap of faith and hope he got the variety of video clips I could not get, being committed to a multi-camera shoot with a crew of one.

It's difficult to get interview and behind-the-scenes video when you're on a rush deadline, and not having sufficient power adapters messed up my equipment setup and left me with a battery shuffle-and-charge situation.

So those were the excuses for not having more video.

I know at least one videographer who shot video from his own fan-experience point of view, so perhaps some of that footage will work well for promo video. He's a great shooter and got a lot of footage. I kind of envy the freedom of being able to run and gun, as I've done it myself at times, but when you're anchored to a bunch of equipment for a multi-camera shoot, you lose those freedoms.

The video is a bit of a challenge to edit with Media 100 v1.13 on the Macintosh. The editor is not bad; it's rather fun to edit with, and it has great audio controls and the ability to color correct.

The biggest problem is that it cannot ingest and edit AVCHD video directly. I have to do an import, and to make things easy, at least for me, I've found that importing through iMovie works best; then I quickly import the resulting QuickTime file into Media 100.
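For those who would rather script the conversion than go through iMovie, here is a minimal sketch of one possible batch route: using ffmpeg to transcode AVCHD (.MTS) clips into ProRes QuickTime files for the editor to import. This is not my actual workflow, just an assumed alternative; the folder names are hypothetical, and it assumes ffmpeg (with its ProRes encoder) is installed.

```python
# A hypothetical batch alternative to the iMovie route: transcode AVCHD
# (.MTS) clips to ProRes QuickTime files with ffmpeg for import into the
# editor. Not the author's actual workflow; folder names are made up, and
# it assumes ffmpeg is installed and on the PATH.

import subprocess
from pathlib import Path

def transcode_avchd(src_dir: str, dst_dir: str) -> None:
    """Convert every .MTS clip in src_dir to a ProRes .mov in dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for clip in sorted(Path(src_dir).glob("*.MTS")):
        out = dst / (clip.stem + ".mov")
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-c:v", "prores_ks", "-profile:v", "2",  # ProRes 422
            "-c:a", "pcm_s16le",                     # uncompressed audio
            str(out),
        ], check=True)

transcode_avchd("AVCHD/BDMV/STREAM", "quicktime_work_files")
```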

The problem is that a QuickTime file for editing at full HD resolution takes up a ton of space compared to the compressed AVCHD clips from the Sony cameras. A Sony clip might take perhaps 2 GB for an hour, depending on the resolution and compression settings. I use the mid-level HD resolution for the files, which is HD but not as high a data rate as a full 1920x1080 recording; to save space I record at 1440x1080. Some might wonder why not full HD at 1920. Well, I'm not actually running the cameras with a crew, and I don't really have the control necessary to get a very good shoot done; I'm kind of hacking through this with unmanned cameras, preset and fixed. Without a known quality going into the below-the-line production, it's probably a waste of space and power to run full HD at the highest quality and data rate.
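To put rough numbers on that blow-up, here is a minimal back-of-the-envelope sketch. The per-hour sizes are assumptions for illustration: roughly 2 GB per hour for AVCHD as noted above, and roughly 50 GB per hour for the QuickTime work file, which lines up with the ~200 GB for four one-hour camera angles described below.

```python
# Rough storage math: AVCHD camera originals vs. QuickTime intermediates.
# The per-hour sizes are assumptions for illustration: ~2 GB/hour AVCHD
# (as noted above) and ~50 GB/hour for the QuickTime work file, consistent
# with the ~200 GB figure for four one-hour camera angles below.

AVCHD_GB_PER_HOUR = 2        # compressed camera original
QUICKTIME_GB_PER_HOUR = 50   # editing intermediate (assumed)

def project_size(cameras: int, hours: float) -> tuple[float, float]:
    """Return (source_gb, intermediate_gb) for a multi-camera shoot."""
    source = cameras * hours * AVCHD_GB_PER_HOUR
    work = cameras * hours * QUICKTIME_GB_PER_HOUR
    return source, work

source, work = project_size(cameras=4, hours=1)
print(f"Source: {source:.0f} GB, intermediates: {work:.0f} GB "
      f"({work / source:.0f}x blow-up)")
# Source: 8 GB, intermediates: 200 GB (25x blow-up)
```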

Also, most of my video is for a very few clients, though I shouldn't even use that term, since I'm not charging them and I'm not hired by them. It's free video for the bands and promoters. It will probably be used or viewed at normal SD resolution on a standard DVD, not full HD. You will not be seeing me burn a Blu-ray disc at full HD from this, because frankly the Macintosh is not set up for Blu-ray unless you are playing around with very expensive Adobe software.

Now the files become huge as QuickTime files for editing. They take up close to 200 GB for four cameras for a short one-hour concert; "The Choir" four-camera sources come to a little north of 200 GB. I need those files to edit quickly in Media 100. They also take time to render to QuickTime, converting at roughly real time on my MacBook. That means four hours of footage from a three- or four-camera shoot will take four to six hours to import, and that's just to get ready to start editing. So you can imagine it will take maybe two weeks just to import all the video I shot, and that's if I have the disk space.
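Since the conversion runs at roughly real time, the import cost scales with total camera-hours. A minimal sketch, where the 60 camera-hour total and the five-hour daily working window are assumptions (60 hours of intermediates at ~50 GB/hour also lines up with the roughly 3 TB mentioned next):

```python
# Import-time estimate: the AVCHD-to-QuickTime conversion runs at roughly
# real time on this MacBook, so import cost scales with total camera-hours.
# The 60 camera-hour total and the 5-hour daily window are assumptions
# (60 hours of intermediates at ~50 GB/hour also matches ~3 TB of files).

camera_hours = 60           # assumed total footage across all angles
convert_rate = 1.0          # hours of conversion per hour of footage
daily_window_hours = 5      # assumed time available for imports per day

days = camera_hours * convert_rate / daily_window_hours
print(f"{camera_hours} camera-hours -> about {days:.0f} days just importing")
# 60 camera-hours -> about 12 days just importing
```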

Imagine something like 3 TB of converted source video on one or two USB drives connected just to start editing. Also, if I color correct video, the intermediate files have to be rendered as well, and that could easily take almost as much space as one source angle. If clips are split, trimmed, and rendered from the timeline, they take up as much space as at least one additional camera angle. This depends on the resolution setting for the rendered video as well; one can get by with a lower resolution or a different compression, and the end user will likely not see a difference.

SOME DETAILS WITH MEDIA 100 and Macintosh (skip this section for a quicker read)

Certainly you don't want to use the high-end Apple ProRes resolutions for the color-corrected clips. With concert video the entire clip may be color corrected; I'm not talking about sophisticated grading of each take, just basic correction for the overall lighting of an entire camera angle. Any color correction requires rendering the entire clip.

(In the old days, earlier Media 100s could play out the video as if it were going through a "proc amp" color-correction system. If the computer was fast enough you could play out and master your video without rendering, though you might get stuttering from the added color-correction overhead.) In the earlier years of editing we often put out a clip and laid it off to tape, so we would play the video out from the Media 100 and could afford no dropped frames. What's surprising is that we needed faster and better drive throughput for real-time playback in the old days. Nowadays the drives can be faster and have more throughput, but they don't have to be: since we end up rendering out the final result, we don't need real-time performance with multiple HD streams, and with the Macintosh and Media 100 we can get by with basic USB drives. We can edit many streams of HD from a basic laptop and a USB drive. The video may stutter a little during editing, but Media 100 keeps working; you can make your trims and edits and then render out, without stuttering, to the final QuickTime file, which plays fine.

I really enjoy the stability and ease of use of the Media 100. The multicam feature lets me edit very quickly, but unfortunately there is a glitch at times in lining up all the video: sometimes one clip's audio channels will go out of sync in multiclip mode. I don't know why, but I can work around it by cutting video only, not audio, while making multicam edit decisions. The audio for a concert comes from only one source anyway, even though the audio in Media 100's multicam editing can come from each camera or any one of them.

BACK TO THE GENERAL TOPIC - Disk space and editing speed (Media 100 vs. Edius)
I edit to a full 1920x1080 frame anyway, so the video looks stunning on an HD display when I play the QuickTime master output. I edit at this size in case I want to do a Blu-ray in the future.

Here is what slows down my editing: I'm trying to keep a copy of the earlier editing source. I currently have four portable USB drives full of Cornerstone video from previous edits, and each time I edit two more bands, another 1.5 or 2 TB USB drive is needed.

I can put the files on another permanent drive at home and keep a few 2 or 3 TB desktop-sized drives for that. But those drives run out of space, and I still have to buy drives to save the rendered edit-source files. I could re-render them and start all over if I wanted to, but that would make future edits take longer.

So let's imagine we go to the store and purchase a USB drive. A Seagate 1.5 or 2 TB drive costs $140 or more, depending on when you buy.

So it's costing me about $40 to $50 per band just to have disk space for all their raw footage.

(Of course it's convenient but expensive to acquire on memory cards as well; that's another topic.)
It's cheaper than the old days to shoot the video: memory cards are cheaper than tape and take up less space, but they still cost some money. What I really like about the new digital technology is that I can store the contents of all my cards from a shoot like this on a fairly cheap $50 USB drive. That's a lot cheaper than spending hundreds of dollars on tapes, and it takes up less space. (I can store the USB drive in a safety deposit box if I want extra security and a safe data-storage site.)

- Editing in QuickTime, however, is a pain (in drive space and time).
Acquisition happens on memory cards, and the original source is much smaller in AVCHD format. I have all of the Cornerstone source on one 320 GB drive, which is cheap, really, for such a large amount of video. It's easy to hand that drive to JPUSA, and it takes up a lot less space than a box of video and audio tapes. I have all the Audio Feed Festival video on one 500 GB drive, though it would fit on something much smaller, under 150 GB.

Now let's add up these bands. Say I have 10 acts from Cornerstone and 5 from Audio Feed; I actually have many more, but let's say we shot 15 bands. Those fifteen bands will take $750 in disk-drive space, because of the old MacBook and Media 100 editing system and the need for the QuickTime intermediate work files. I'm on that path right now. And you can imagine: it could cost $700 to $1,000 in added equipment for flexible editing of a couple of festivals. That cost is a significant expense, especially for "free video".
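The per-band arithmetic, as a minimal sketch using the ballpark figures above (the $40 to $50 per band range is an estimate, not an invoice):

```python
# Disk cost for the QuickTime-intermediate workflow, using the ballpark
# figures above: $40-$50 of drive space per band, 15 bands shot.
# These are estimates, not measured costs.

cost_per_band_low, cost_per_band_high = 40, 50  # dollars of disk per band
bands = 15

low = bands * cost_per_band_low
high = bands * cost_per_band_high
print(f"{bands} bands: ${low} to ${high} in drive space")
# 15 bands: $600 to $750 in drive space
```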

- The 3D option.
I wanted to shoot and edit 3D video this year. I almost shot the Audio Feed Festival in 3D, but that would have required a pretty large camera expense, so I didn't do it.

Also, 3D is different from 2D: if you shoot in 3D but use 2D techniques, your 3D video will suffer and probably give your viewers air sickness. So you really can't shoot exciting 2D video for cutting and artistic editing and then use that same footage (recorded in 3D on a 3D camcorder) for 3D. You have to shoot calmer, almost boring video for 3D, because the depth adds something on its own and you don't want background horizons shifting. To pull off both a 2D and a 3D shoot, I'd really be better off shooting with two sets of cameras: a setup for 2D and a setup for 3D.

Imagine manning those. Let's say I have six camera operators plus helpers: perhaps three 2D shooters and three 3D shooters, plus backup folks who can man a camera to give them a break. Nobody can expect others to do the amount of grueling shooting I'd be doing alone for the love of it, even if they were being paid top dollar. Most camera operators will not do a 12-hour shoot with most of that time behind the camera. How many bands would play a marathon 10 hours? If the bands won't do it, the video people won't likely do it either. This is different from a guy with a camcorder on a tripod who sits through the entire concert; I'm talking about active camera operators. The style we developed, or tried to, required a lot of creativity from each operator, who shot independently of "direction" from a director. We didn't have a director who dictated what each operator should shoot; rather, each operator worked independently with a "goal" of covering certain aspects of the show. So for a three-camera shoot I'd really need more than six operators if I were shooting both 3D and 2D.

Now, I'd rather shoot with two good camera operators than ten lousy ones who don't know what they're doing. The worst thing you can do is shoot a ton of footage and have all of it be bad; it makes editing a depressing nightmare, and often you can't get even decent memory video. This is one of the problems with a typical trained "public access" cable camera operator: they're trained to be robots and do what they're told. They have no creativity or independent thought; indeed, if they had any, it would be drummed out of them by commands from the director. Videographers, camera operators, or even band members can shoot better concert video than a typical "camera operator" trained at a cable station for public access.

Well, obviously I can't get decent video without a team, and I couldn't even drum up one member of the old team for the Audio Feed Festival. Even offering to pay them didn't work.

- LET'S pretend I shot the festival in 3D with a bunch of 3D camcorders and good tripods.
To edit in 3D I need to use a PC or run software on Windows. I could use Sony Vegas Pro or Grass Valley Edius to edit 3D video.

WHY EDIUS?
In looking at the two and doing a little research, I think I'd go with the Grass Valley Edius approach. It can edit 3D and seems to have a very fast, intuitive interface. It may be a little clunkier than Media 100, but it's pretty close. Edius also edits natively with AVCHD video, working basically from the same video file rather than rendering out a huge QuickTime work file.

Edius 6.5 is about $700, so it would be cheaper to get Edius and work with that than to keep buying disk drives. The problem, however, is that I'd have to get a PC laptop with an i7 processor above 3 GHz and likely 8 or 16 GB of memory. These computers cost about $1,200. Why a laptop? I know a desktop is more flexible: you can use cheaper drives, build drive arrays, and have a full production setup at home. But my house is a mental mess, with many demands from sick people here, and I can't really concentrate on editing with an edit suite at home. So I often edit at a coffeehouse or wherever I'm eating, and that requires a laptop.

So the Edius "solution" is about $1,800 more, and I'm rather doubtful Edius would run on the cheap $350 Windows PC I use for astrophotography. Keep in mind this is theory; I'm not writing this to say I will do it. The move to 3D, if I were to purchase a few 3D cameras, would cost at least $3,000 for three cameras without accessories (extra batteries). Add batteries and that's probably another $1,000 or more if you're doing remote shoots. Add other equipment like wireless mics or interview mics and it costs more, though I have the microphone side figured out for interviews. So to do the festival right, and do it quickly in 3D without needing a ton of disk space, we are talking about spending $6k to $7k on equipment. And even the memory cards could be more expensive, depending on the choice of 3D cameras.

Then I'd probably have to hire a crew to do the shoot, as all my old friends are too old and tired of shooting Christian festivals for free, or of most Christian festival promotion, at least that's what they say. So we are talking about a $10k investment in video for a hobby event. Needless to say, since the total receipts of the Audio Feed Festival are probably less than $40,000 (at $40 per head with 1,000 people), I could spend as much as 25 percent of the gross receipts of the festival just gearing up for a "hobby video shoot". And this is with low-end equipment. This is why most videographers are broke, especially the low-end (hobby) videographers. There are cheaper ways to do the event.
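Tallying that hypothetical 3D gear-up against the festival's gross, as a minimal sketch; every number is a ballpark from the discussion above, and the crew figure is simply the remainder of the ~$10k estimate:

```python
# Tallying the hypothetical 3D gear-up against the festival's gross.
# Every figure is a ballpark from the discussion above, not a quote:
# Edius ~$700, i7 laptop ~$1,200, three 3D cameras ~$3,000,
# batteries ~$1,000, with crew and extras filling out the ~$10k estimate.

gear = {
    "Edius 6.5": 700,
    "i7 PC laptop": 1200,
    "three 3D cameras": 3000,
    "extra batteries": 1000,
}
crew_and_misc = 10_000 - sum(gear.values())  # remainder of the ~$10k estimate

gross_receipts = 40 * 1000   # ~$40 per head x ~1,000 attendees (assumed)
total = sum(gear.values()) + crew_and_misc

print(f"Gear: ${sum(gear.values()):,}  Total with crew: ${total:,}")
print(f"Share of gross receipts: {total / gross_receipts:.0%}")
# Gear: $5,900  Total with crew: $10,000
# Share of gross receipts: 25%
```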

I can use basic Sony 2D camcorders, which are cheap. I can edit in 2D, and I can take my time editing.

If the lighting guys could be reined in a little, perhaps by giving them a video monitor and a monitor feed so they can see what the video looks like, then we'd be able to get better results without expensive camcorders, and it would improve fan video of the concert events too.

The "data" edit approach to use for the festival would be to take short clips of each thing and only digitize the short clips.  Maybe a two song clip instead of a full concert clip.   Unfortunately for long concert video meaning making a video for the bands, I tape as much of each concert with full clips as possible. Long clips mean I have more to work with and I may capture something magical that would have been missed by only taping part of the event.   Also if the band is bad or out of tune for some of the songs, if we tape them all we can select the best clips.  This is not possible if you're choosing a song or two to record on the fly.   The challenge of course personally is to stop the flow of many songs out, and "not publish them" or not use them.  People will beg and ask for video clips or full concert videos and you have to keep your priorities straight and show restraint and respect the artist and labels rights.

WITH TAPE we could capture "part of the clip"
With tape-based systems in older nonlinear setups, we would digitize only part of a clip, perhaps playing the tape and recording just a portion of it into the nonlinear system. It was also easier to do a live (or after-the-event) mix with a mixer to get full events. You cannot beat the speed of a live mix for multi-camera events; nothing beats a good video mix. For careful edits and a polished product afterwards, an editor is of course much better.

(In the old days of tape and nonlinear, I could digitize just one song from each camera, which takes less disk space. There were other limits as well: often the length of the clip you could digitize was capped by the maximum file size. So things are a lot better now for nonlinear systems, and it's tough to beat a digital system. What's amazing is that my old nonlinear systems had maybe 25 GB, at most 40 GB, of drive space, and they could hold a lot of video. But each time I did a long project I would often create a master output of the project and just go back in and erase all the source. It was kind of freeing, a sort of housecleaning, to start over.)

Also, if I edited at a lower resolution, I could save space and perhaps digitize the video quicker.

Because my approach is to get video I can give to the bands, a quick edit of all the footage is hampered. By trying to give nice long event recordings to the bands, I've handicapped my ability to get a fast edit completed.

That's the nature of budgets.

Now, in the old days I had a nice high-paying job in the IT field, but I decided to take early retirement and help at the house with family members who have long-term illnesses. This is mostly a blessing: I'm able to help out. But I'm on an extremely fixed budget now, and I don't have an extra $1,000 each month to throw around on equipment or give away to ministries.

Unfortunately, retirement life for me is much less relaxing than it should be. In theory I should be having a lot of fun, but I'm hampered by major health issues at home; it's like running an ICU here. With only two caregivers, it has basically stopped our small family from living most of a normal life.

Well, enough venting. Things are really going well with The Choir edit. I almost have a decent full-length edit complete. It's a little rough, especially with some of the blown-out lighting from too intense a par/spot effect at times. It reminds me of some of the poor video I shot of Adam Again back in the early 1990s at Cornerstone. I have some early Choir and Adam Again footage from 1992, and that stuff is pretty bad, due to my poor exposure and blown-out faces. I've seen the same problem in some older main-stage video we shot, with two spotlights on Mike Roe, for example, blowing out his face and making it difficult to get an exposure that showed the band the way it should have been.

Our video crew used to curse spotlights. For video you need even lighting, because video can handle only about 30 levels of contrast; the human eye can see about 110 levels, and film about 100, but video only 30.

If the video guys are not in the loop on the lighting of an event, the video of concerts, or of most things, will suffer.

If you want professional video, of course, the video director has to come in and exercise creative control over more of the event. That's what happens with concert videos shot for commercial productions. We can't get to that level without a budget, but we can at times get closer to it and provide a friendlier event for those with cameras or camcorders, where they're allowed.

One way to do this is to become part of the event's video process and use live giant-screen projection. When the lighting guys see the effects of their lighting on the live video, they may modify their setups without much begging or discussion.

More often than not, people at an event don't notice the difference between proper fill lighting for video and lighting done on the cheap for the naked eye alone.

Below, the drives for Cornerstone and Audio Feed Festival editing.


- Posted using BlogPress from my iPad
